Logical Replication of sequences

Started by Amit Kapila over 1 year ago · 496 messages
#1 Amit Kapila
amit.kapila16@gmail.com

In the past, we have discussed various approaches to replicate
sequences by decoding the sequence changes from WAL. However, we faced
several challenges to achieve the same, some of which are due to the
non-transactional nature of sequences. The major ones were: (a)
correctness of the decoding part, some of the problems were discussed
at [1][2][3], (b) handling of sequences, especially adding certain
sequences automatically (e.g. sequences backing SERIAL/BIGSERIAL
columns), for built-in logical replication is not considered in the
proposed work [1], and (c) there were some performance concerns in
not-so-frequent scenarios [4] (see performance issues); we can probably
deal with this by making sequences optional for built-in logical
replication.

It may be possible to deal with these and any other issues with more
work, but since the use case for this feature is primarily major
version upgrades, it is not clear that we want to make such a big
change to the code, or whether there are better alternatives to achieve
the same goal.

This time at pgconf.dev (https://2024.pgconf.dev/), we discussed
alternative approaches for this work which I would like to summarize.
The various methods we discussed are as follows:

1. Provide a tool to copy all the sequences from the publisher to the
subscriber. The major drawback is that users need to perform this as
an additional step during the upgrade, which would be inconvenient and
probably not as useful as some built-in mechanism.
2. Provide a command, say Alter Subscription ... Replicate Sequences
(or something like that), which users can perform before shutting down
the publisher node during an upgrade. This would allow copying all the
sequences from the publisher node to the subscriber node directly (a
rough sketch of this workflow appears after this list). Similar to the
previous approach, this could also be inconvenient for users.
3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding the checkpoint record. The two ways to
achieve this are: (a) WAL-log a special NOOP record just before
shutting down the checkpointer, then allow the walsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record; (b) similar to the previous idea, but instead of WAL
logging a new record, directly invoke a decoding callback after the
walsender receives a request to shut down, which will allow pgoutput to
read and send the required sequences. This approach has the drawback
that we are adding more work at the time of shutdown, but note that we
already wait for all the WAL records to be decoded and sent before
shutting down the walsender during shutdown of the node.
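
For illustration, here is a rough sketch of what option 2 could look
like from the user's side, run on the subscriber for each subscription
before the publisher is shut down for pg_upgrade. The REPLICATE
SEQUENCES spelling is hypothetical (just one possible spelling for the
new command), and sub1/sub2 are placeholder subscription names:

-- hypothetical command: fetch the current value of every published
-- sequence from the publisher and apply it to the local sequences
ALTER SUBSCRIPTION sub1 REPLICATE SEQUENCES;
ALTER SUBSCRIPTION sub2 REPLICATE SEQUENCES;
-- after this, the publisher can be shut down and upgraded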

Any other ideas?

I have added the members I remember that were part of the discussion
in the email. Please feel free to correct me if I have misunderstood
or missed any point we talked about.

Thoughts?

[1]: /messages/by-id/e4145f77-6f37-40e0-a770-aba359c50b93@enterprisedb.com
[2]: /messages/by-id/CAA4eK1Lxt+5a9fA-B7FRzfd1vns=EwZTF5z9_xO9Ms4wsqD88Q@mail.gmail.com
[3]: /messages/by-id/CAA4eK1KR4=yALKP0pOdVkqUwoUqD_v7oU3HzY-w0R_EBvgHL2w@mail.gmail.com
[4]: /messages/by-id/12822961-b7de-9d59-dd27-2e3dc3980c7e@enterprisedb.com

--
With Regards,
Amit Kapila.

#2 Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Amit Kapila (#1)
Re: Logical Replication of sequences

Hi,

On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

Thanks. IIUC, both of the above approaches decode the sequences only
during shutdown. I'm wondering, why not periodically decode and
replicate the published sequences so that the decoding at shutdown
will not take that long? I can imagine a case where there are tens
of thousands of sequences on a production server, and surely decoding
and sending them just during the shutdown can take a lot of time,
hampering the overall server uptime.

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

#3 Amit Kapila
amit.kapila16@gmail.com
In reply to: Bharath Rupireddy (#2)
Re: Logical Replication of sequences

On Tue, Jun 4, 2024 at 4:53 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:

On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

Thanks. IIUC, both of the above approaches decode the sequences during
only shutdown. I'm wondering, why not periodically decode and
replicate the published sequences so that the decoding at the shutdown
will not take that longer?

Even if we decode it periodically (say, each time we decode the
checkpoint record), we would still need to send the entire set of
sequences at shutdown. This is because the sequences may have changed
since the last time we sent them.

I can imagine a case where there are tens
of thousands of sequences in a production server, and surely decoding
and sending them just during the shutdown can take a lot of time
hampering the overall server uptime.

It is possible, but we will send only the sequences that belong to
publications for which the walsender is supposed to send the required
data. Now, we can also imagine providing option 2 (Alter Subscription
... Replicate Sequences) so that users can replicate sequences before
shutdown and then disable the subscriptions so that there won't be a
corresponding walsender.

--
With Regards,
Amit Kapila.

#4 Ashutosh Bapat
ashutosh.bapat.oss@gmail.com
In reply to: Amit Kapila (#1)
Re: Logical Replication of sequences

On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

Any other ideas?

In case of a primary crash, the sequences won't get replicated. That is
true even with the previous approach if the walsender is shut down
because of a crash, but it is more serious with this approach. How about
periodically sending this information?

--
Best Wishes,
Ashutosh Bapat

#5 Yogesh Sharma
yogesh.sharma@catprosystems.com
In reply to: Amit Kapila (#1)
Re: Logical Replication of sequences

On 6/4/24 06:57, Amit Kapila wrote:

1. Provide a tool to copy all the sequences from publisher to
subscriber. The major drawback is that users need to perform this as
an additional step during the upgrade which would be inconvenient and
probably not as useful as some built-in mechanism.

Agreed, this requires additional steps and is not a preferred approach
in my opinion. When a large set of sequences is present, it will add
additional downtime to the upgrade process.

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

This is similar to option 1 except that it is a SQL command now. Still
not a preferred approach in my opinion. When a large set of sequences is
present, it will add additional downtime to the upgrade process.

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

At the time of shutdown: a) most logical upgrades don't necessarily call
for a shutdown, and b) it will still add to the total downtime with a
large set of sequences. The incremental option is better as it will not
require a shutdown.

I do see a scenario where the sequence of events can lead to lost
sequence state and duplicate sequence values, if the subscriber starts
consuming sequences while the publisher is also consuming them. In such
cases, the subscriber should not be allowed to consume sequences.

--
Kind Regards,
Yogesh Sharma
Open Source Enthusiast and Advocate
PostgreSQL Contributors Team @ RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

#6 Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#4)
Re: Logical Replication of sequences

On Tue, Jun 4, 2024 at 7:40 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

Any other ideas?

In case of primary crash the sequence won't get replicated. That is true even with the previous approach in case walsender is shut down because of a crash, but it is more serious with this approach.

Right, but if we just want to support a major version upgrade scenario
then this should be fine because upgrades require a clean shutdown.

How about periodically sending this information?

Now, if we want to support some sort of failover then probably this
will help. Do you have that use case in mind? If we want to send
periodically then we can do it when decoding checkpoint
(XLOG_CHECKPOINT_ONLINE) or some other periodic WAL record like
running_xacts (XLOG_RUNNING_XACTS).

--
With Regards,
Amit Kapila.

#7 Amit Kapila
amit.kapila16@gmail.com
In reply to: Yogesh Sharma (#5)
Re: Logical Replication of sequences

On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma
<yogesh.sharma@catprosystems.com> wrote:

On 6/4/24 06:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

This is similar to option 1 except that it is a SQL command now.

Right, but I would still prefer a command as it provides clear steps
for the upgrade. Users need to perform (a) Replicate Sequences for a
particular subscription (b) Disable that subscription (c) Perform (a)
and (b) for all the subscriptions corresponding to the publisher we
want to shut down for upgrade.

I agree there are some manual steps involved here but it is advisable
for users to ensure that they have received the required data on the
subscriber before the upgrade of the publisher node, otherwise, they
may not be able to continue replication after the upgrade. For
example, see the "Prepare for publisher upgrades" step in pg_upgrade
docs [1].

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

At the time of shutdown a) most logical upgrades don't necessarily call
for shutdown

Won't the major version upgrade expect that the node is down? Refer to
step "Stop both servers" in [1]https://www.postgresql.org/docs/devel/pgupgrade.html.

b) it will still add to total downtime with large set of

sequences. Incremental option is better as it will not require a shutdown.

I do see a scenario where sequence of events can lead to loss of sequence
and generate duplicate sequence values, if subscriber starts consuming
sequences while publisher is also consuming them. In such cases, subscriber
shall not be allowed sequence consumption.

It would be fine to not allow subscribers to consume sequences that
are being logically replicated, but what about the cases where we
haven't sent the latest sequence values before the shutdown of the
publisher? In such a case, the publisher would have already consumed
some values that were never sent to the subscriber, and once the
publisher is down, even if we re-allow sequence values to be consumed
from the subscriber, it can lead to duplicate values.
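
To make that hazard concrete, here is a hedged example with made-up
numbers, assuming a table t1 with a payload column and a serial primary
key backed by sequence s1 (all names and values are placeholders):

-- State at the time the sequence was last copied:
--   publisher:  SELECT last_value FROM s1;   -- 100
--   subscriber: SELECT last_value FROM s1;   -- 100
-- The publisher then keeps running and consumes 101..150; the rows using
-- those ids are replicated, but the new sequence position is not.  After
-- the publisher is shut down, if the subscriber is allowed to consume
-- the sequence again:
INSERT INTO t1 (payload) VALUES ('new row');
-- ERROR:  duplicate key value violates unique constraint "t1_pkey"
-- because nextval('s1') on the subscriber returns 101, which already
-- exists in t1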

[1]: https://www.postgresql.org/docs/devel/pgupgrade.html

--
With Regards,
Amit Kapila.

#8 Peter Eisentraut
peter@eisentraut.org
In reply to: Amit Kapila (#1)
Re: Logical Replication of sequences

On 04.06.24 12:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

I would start with this. In any case, you're going to need to write
code to collect all the sequence values, send them over some protocol,
apply them on the subscriber. The easiest way to start is to trigger
that manually. Then later you can add other ways to trigger it, either
by timer or around shutdown, or whatever other ideas there might be.

#9 Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#7)
Re: Logical Replication of sequences

On Wed, Jun 5, 2024 at 9:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma
<yogesh.sharma@catprosystems.com> wrote:

On 6/4/24 06:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

This is similar to option 1 except that it is a SQL command now.

Right, but I would still prefer a command as it provides clear steps
for the upgrade. Users need to perform (a) Replicate Sequences for a
particular subscription (b) Disable that subscription (c) Perform (a)
and (b) for all the subscriptions corresponding to the publisher we
want to shut down for upgrade.

Another advantage of this approach over a plain tool to copy all
sequences before the upgrade is that here we can have the facility to
copy just the required sequences, i.e. the set of sequences that the
user has specified as part of the publication.

--
With Regards,
Amit Kapila.

#10 Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Amit Kapila (#3)
Re: Logical Replication of sequences

Hi,

On Tue, Jun 4, 2024 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Even if we decode it periodically (say each time we decode the
checkpoint record) then also we need to send the entire set of
sequences at shutdown. This is because the sequences may have changed
from the last time we sent them.

Agreed. How about decoding and sending only the sequences that have
changed since the last time they were sent? I know it requires a
bit of tracking and more work, but all I'm looking for is to reduce
the amount of work that walsenders need to do during the shutdown.

Having said that, I like the idea of letting the user sync the
sequences via ALTER SUBSCRIPTION command and not weave the logic into
the shutdown checkpoint path. As Peter Eisentraut said here
/messages/by-id/42e5cb35-4aeb-4f58-8091-90619c7c3ecc@eisentraut.org,
this can be a good starting point to get going.

I can imagine a case where there are tens
of thousands of sequences in a production server, and surely decoding
and sending them just during the shutdown can take a lot of time
hampering the overall server uptime.

It is possible but we will send only the sequences that belong to
publications for which walsender is supposed to send the required
data.

Right, but what if the publication tables have tens of thousands of
sequences?

Now, we can also imagine providing option 2 (Alter Subscription
... Replicate Sequences) so that users can replicate sequences before
shutdown and then disable the subscriptions so that there won't be a
corresponding walsender.

As stated above, I like this idea to start with.

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

#11 Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Eisentraut (#8)
Re: Logical Replication of sequences

On Wed, Jun 5, 2024 at 12:51 PM Peter Eisentraut <peter@eisentraut.org> wrote:

On 04.06.24 12:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

I would start with this. In any case, you're going to need to write
code to collect all the sequence values, send them over some protocol,
apply them on the subscriber. The easiest way to start is to trigger
that manually. Then later you can add other ways to trigger it, either
by timer or around shutdown, or whatever other ideas there might be.

Agreed. To achieve this, we can allow sequences to be copied during
the initial CREATE SUBSCRIPTION command, similar to what we do for
tables. And then later, via a new or existing command, we can re-copy
the sequences that already exist on the subscriber.

The options for the new command could be:
Alter Subscription ... Refresh Sequences
Alter Subscription ... Replicate Sequences

In the second option, we need to introduce a new keyword Replicate.
Can you think of any better option?

In addition to the above, the command Alter Subscription .. Refresh
Publication will fetch any missing sequences similar to what it does
for tables.
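
A hedged sketch of how that lifecycle could look, assuming the Refresh
Sequences spelling (still under discussion); the subscription name,
publication name, and connection string are placeholders:

-- initial sync: published sequences are copied along with the tables
CREATE SUBSCRIPTION sub1
    CONNECTION 'host=pub1 dbname=postgres'
    PUBLICATION pub1;

-- re-copy the current values of the sequences already known to the
-- subscription (new command, exact spelling to be decided)
ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;

-- pick up tables and sequences newly added to the publication, as
-- Refresh Publication does for tables today
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;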

Thoughts?

--
With Regards,
Amit Kapila.

#12 Ashutosh Bapat
ashutosh.bapat.oss@gmail.com
In reply to: Amit Kapila (#6)
Re: Logical Replication of sequences

On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jun 4, 2024 at 7:40 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Tue, Jun 4, 2024 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com>

wrote:

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

Any other ideas?

In case of primary crash the sequence won't get replicated. That is true

even with the previous approach in case walsender is shut down because of a
crash, but it is more serious with this approach.

Right, but if we just want to support a major version upgrade scenario
then this should be fine because upgrades require a clean shutdown.

How about periodically sending this information?

Now, if we want to support some sort of failover then probably this
will help. Do you have that use case in mind?

Regular failover was a goal for supporting logical replication of
sequences. That might be more common than the major upgrade scenario.

If we want to send
periodically then we can do it when decoding checkpoint
(XLOG_CHECKPOINT_ONLINE) or some other periodic WAL record like
running_xacts (XLOG_RUNNING_XACTS).

Yeah. I am thinking along those lines.

It must be noted, however, that none of those options makes sure that
the replicated sequences' state is consistent with the state of the
replicated objects that use those sequences. E.g. table t1 uses sequence
s1. By the last sequence replication, as of time T1, let's say t1 had
consumed values up to vl1 from s1. But later, by time T2, it consumed
values up to vl2, which were not replicated, while the changes to t1 by
T2 were replicated. If failover happens at that point, INSERTs on t1
would fail because of duplicate keys (values between vl1 and vl2). The
previous attempt to support logical sequence replication solved this
problem by replicating a future state of the sequences (current value
+/- log count). Similarly, if the sequence was ALTERed between T1 and
T2, the state of the sequence on the replica would be inconsistent with
the state of t1. Failing over at this stage might leave t1 in an
inconsistent state.
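
For illustration, the "future state" idea can be read in terms of what a
sequence relation exposes today; the numbers are made up and s1 is a
placeholder:

-- On the publisher, nextval() hands out values from a pre-logged range;
-- log_cnt says how many more fetches remain before new WAL must be
-- written, so for a simple increment-by-1 sequence the last WAL-logged
-- position is roughly last_value + log_cnt.
SELECT last_value, log_cnt FROM s1;   -- e.g. 100, 27
-- Replicating that future position means the subscriber, after
-- failover, starts beyond anything the publisher could have handed out
-- without writing more WAL:
SELECT setval('s1', 100 + 27);        -- wastes a few values, avoids duplicates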

--
Best Wishes,
Ashutosh Bapat

#13 Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#12)
Re: Logical Replication of sequences

On Wed, Jun 5, 2024 at 6:01 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

How about periodically sending this information?

Now, if we want to support some sort of failover then probably this
will help. Do you have that use case in mind?

Regular failover was a goal for supporting logical replication of sequences. That might be more common than major upgrade scenario.

We can't support regular failovers to subscribers unless we can
replicate/copy slots because the existing nodes connected to the
current publisher/primary would expect that. It should be primarily
useful for major version upgrades at this stage.

If we want to send
periodically then we can do it when decoding checkpoint
(XLOG_CHECKPOINT_ONLINE) or some other periodic WAL record like
running_xacts (XLOG_RUNNING_XACTS).

Yeah. I am thinking along those lines.

It must be noted, however, that none of those optional make sure that the replicated sequence's states are consistent with the replicated object state which use those sequences.

Right. As others are advocating, it seems better to support it
manually via a command and then later extend it to run at shutdown
or at some regular interval. If we do that, then we should be able to
support major version upgrades and planned switchovers.

--
With Regards,
Amit Kapila.

#14 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#7)
Re: Logical Replication of sequences

On Wed, Jun 5, 2024 at 12:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma
<yogesh.sharma@catprosystems.com> wrote:

On 6/4/24 06:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

This is similar to option 1 except that it is a SQL command now.

Right, but I would still prefer a command as it provides clear steps
for the upgrade. Users need to perform (a) Replicate Sequences for a
particular subscription (b) Disable that subscription (c) Perform (a)
and (b) for all the subscriptions corresponding to the publisher we
want to shut down for upgrade.

I agree there are some manual steps involved here but it is advisable
for users to ensure that they have received the required data on the
subscriber before the upgrade of the publisher node, otherwise, they
may not be able to continue replication after the upgrade. For
example, see the "Prepare for publisher upgrades" step in pg_upgrade
docs [1].

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

At the time of shutdown a) most logical upgrades don't necessarily call
for shutdown

Won't the major version upgrade expect that the node is down? Refer to
step "Stop both servers" in [1].

I think the idea is that the publisher is the old version and the
subscriber is the new version, and changes generated on the publisher
are replicated to the subscriber via logical replication. And at some
point, we change the application (or a router) settings so that no
more transactions come to the publisher, do the last upgrade
preparation work (e.g. copying the latest sequence values if
required), and then change the application so that new transactions
come to the subscriber.

I remember the blog post about Knock doing a similar process to
upgrade the clusters with minimal downtime [1].

Regards,

[1]: https://knock.app/blog/zero-downtime-postgres-upgrades

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#15 Amit Kapila
amit.kapila16@gmail.com
In reply to: Bharath Rupireddy (#10)
Re: Logical Replication of sequences

On Wed, Jun 5, 2024 at 3:17 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:

On Tue, Jun 4, 2024 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Even if we decode it periodically (say each time we decode the
checkpoint record) then also we need to send the entire set of
sequences at shutdown. This is because the sequences may have changed
from the last time we sent them.

Agree. How about decoding and sending only the sequences that are
changed from the last time when they were sent? I know it requires a
bit of tracking and more work, but all I'm looking for is to reduce
the amount of work that walsenders need to do during the shutdown.

I see your point, but going towards tracking the changed sequences
sounds like moving towards what we do for incremental backups, unless
we can invent some other smart way.

Having said that, I like the idea of letting the user sync the
sequences via ALTER SUBSCRIPTION command and not weave the logic into
the shutdown checkpoint path. As Peter Eisentraut said here
/messages/by-id/42e5cb35-4aeb-4f58-8091-90619c7c3ecc@eisentraut.org,
this can be a good starting point to get going.

Agreed.

I can imagine a case where there are tens
of thousands of sequences in a production server, and surely decoding
and sending them just during the shutdown can take a lot of time
hampering the overall server uptime.

It is possible but we will send only the sequences that belong to
publications for which walsender is supposed to send the required
data.

Right, but what if all the publication tables can have tens of
thousands of sequences.

In such cases we have no option but to send all the sequences.

Now, we can also imagine providing option 2 (Alter Subscription
... Replicate Sequences) so that users can replicate sequences before
shutdown and then disable the subscriptions so that there won't be a
corresponding walsender.

As stated above, I like this idea to start with.

+1.

--
With Regards,
Amit Kapila.

#16 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#11)
Re: Logical Replication of sequences

On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 5, 2024 at 12:51 PM Peter Eisentraut <peter@eisentraut.org> wrote:

On 04.06.24 12:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

I would start with this. In any case, you're going to need to write
code to collect all the sequence values, send them over some protocol,
apply them on the subscriber. The easiest way to start is to trigger
that manually. Then later you can add other ways to trigger it, either
by timer or around shutdown, or whatever other ideas there might be.

Agreed.

+1

To achieve this, we can allow sequences to be copied during
the initial CREATE SUBSCRIPTION command similar to what we do for
tables. And then later by new/existing command, we re-copy the already
existing sequences on the subscriber.

The options for the new command could be:
Alter Subscription ... Refresh Sequences
Alter Subscription ... Replicate Sequences

In the second option, we need to introduce a new keyword Replicate.
Can you think of any better option?

Another idea is doing that using options. For example,

For initial sequences synchronization:

CREATE SUBSCRIPTION ... WITH (copy_sequence = true);

For re-copy (or update) sequences:

ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);

In addition to the above, the command Alter Subscription .. Refresh
Publication will fetch any missing sequences similar to what it does
for tables.

On the subscriber side, do we need to track which sequences are
created via CREATE/ALTER SUBSCRIPTION?

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#17 Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#14)
Re: Logical Replication of sequences

On Thu, Jun 6, 2024 at 9:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 5, 2024 at 12:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma
<yogesh.sharma@catprosystems.com> wrote:

On 6/4/24 06:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

This is similar to option 1 except that it is a SQL command now.

Right, but I would still prefer a command as it provides clear steps
for the upgrade. Users need to perform (a) Replicate Sequences for a
particular subscription (b) Disable that subscription (c) Perform (a)
and (b) for all the subscriptions corresponding to the publisher we
want to shut down for upgrade.

I agree there are some manual steps involved here but it is advisable
for users to ensure that they have received the required data on the
subscriber before the upgrade of the publisher node, otherwise, they
may not be able to continue replication after the upgrade. For
example, see the "Prepare for publisher upgrades" step in pg_upgrade
docs [1].

3. Replicate published sequences via walsender at the time of shutdown
or incrementally while decoding checkpoint record. The two ways to
achieve this are: (a) WAL log a special NOOP record just before
shutting down checkpointer. Then allow the WALsender to read the
sequence data and send it to the subscriber while decoding the new
NOOP record. (b) Similar to the previous idea but instead of WAL
logging a new record directly invokes a decoding callback after
walsender receives a request to shutdown which will allow pgoutput to
read and send required sequences. This approach has a drawback that we
are adding more work at the time of shutdown but note that we already
waits for all the WAL records to be decoded and sent before shutting
down the walsender during shutdown of the node.

At the time of shutdown a) most logical upgrades don't necessarily call
for shutdown

Won't the major version upgrade expect that the node is down? Refer to
step "Stop both servers" in [1].

I think the idea is that the publisher is the old version and the
subscriber is the new version, and changes generated on the publisher
are replicated to the subscriber via logical replication. And at some
point, we change the application (or a router) settings so that no
more transactions come to the publisher, do the last upgrade
preparation work (e.g. copying the latest sequence values if
requried), and then change the application so that new transactions
come to the subscriber.

Okay, thanks for sharing the exact steps. If one has to follow that
path, then sending incrementally (at checkpoint WAL or other times)
won't work because we want to ensure that the sequences are up-to-date
before the application starts using the new database. To do that in a
bullet-proof way, one has to copy/replicate sequences while requests to
the new database are paused (reference from the blog you shared: "For
the first second after flipping the flag, our application artificially
paused any new database requests for one second.").
Currently, they are using some guesswork to replicate sequences, which
requires manual verification and more manual work for each sequence.
The new command (Alter Subscription ... Replicate Sequences) should
ease their procedure and would require little or no verification.
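
For context, the kind of manual guesswork the new command would replace
usually looks something like this (a hedged sketch; the sequence name,
the value 100 read from the publisher, and the margin of 1000 are all
made up):

-- on the old publisher: capture the current sequence positions
SELECT schemaname, sequencename, last_value FROM pg_sequences;

-- on the new subscriber: apply each value plus an arbitrary safety
-- margin to cover anything consumed after the snapshot was taken
SELECT setval('public.s1', 100 + 1000);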

I remember the blog post about Knock doing a similar process to
upgrade the clusters with minimal downtime[1].

Thanks for sharing the blog post.

--
With Regards,
Amit Kapila.

#18 Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#16)
Re: Logical Replication of sequences

On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

To achieve this, we can allow sequences to be copied during
the initial CREATE SUBSCRIPTION command similar to what we do for
tables. And then later by new/existing command, we re-copy the already
existing sequences on the subscriber.

The options for the new command could be:
Alter Subscription ... Refresh Sequences
Alter Subscription ... Replicate Sequences

In the second option, we need to introduce a new keyword Replicate.
Can you think of any better option?

Another idea is doing that using options. For example,

For initial sequences synchronization:

CREATE SUBSCRIPTION ... WITH (copy_sequence = true);

How will it interact with the existing copy_data option? So copy_data
will become equivalent to copy_table_data, right?

For re-copy (or update) sequences:

ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);

Similar to the previous point, it can be slightly confusing w.r.t.
copy_data. And would copy_sequence here mean that it would copy the
values of both pre-existing and newly added sequences? If so, that
would make it behave differently than copy_data. The other
possibility in this direction would be to introduce an option like
replicate_all_sequences/copy_all_sequences, which indicates a copy of
both pre-existing and new sequences, if any.

If we want to go in the direction of having an option such as
copy_(all)_sequences, then do you think specifying in the docs that
copy_data is just for tables would be sufficient? I am afraid that it
could be confusing for users.
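
To show the interaction being discussed, a hedged sketch with a
hypothetical copy_all_sequences option (names and connection string are
placeholders; copy_data keeps its current, table-only meaning):

CREATE SUBSCRIPTION sub1
    CONNECTION 'host=pub1 dbname=postgres'
    PUBLICATION pub1
    WITH (copy_data = true, copy_all_sequences = true);

-- later: re-sync sequences without re-copying table data
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION
    WITH (copy_data = false, copy_all_sequences = true);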

In addition to the above, the command Alter Subscription .. Refresh
Publication will fetch any missing sequences similar to what it does
for tables.

On the subscriber side, do we need to track which sequences are
created via CREATE/ALTER SUBSCRIPTION?

I think so, unless we find some other way to know, at refresh
publication time, which new sequences need to be part of the
subscription. What should be the behavior w.r.t. sequences when the
user performs ALTER SUBSCRIPTION ... REFRESH PUBLICATION? I was
thinking that, similar to tables, it should fetch any missing sequence
information from the publisher.

--
With Regards,
Amit Kapila.

#19 Ashutosh Bapat
ashutosh.bapat.oss@gmail.com
In reply to: Amit Kapila (#13)
Re: Logical Replication of sequences

On Thu, Jun 6, 2024 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 5, 2024 at 6:01 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com>

wrote:

How about periodically sending this information?

Now, if we want to support some sort of failover then probably this
will help. Do you have that use case in mind?

Regular failover was a goal for supporting logical replication of

sequences. That might be more common than major upgrade scenario.

We can't support regular failovers to subscribers unless we can
replicate/copy slots because the existing nodes connected to the
current publisher/primary would expect that. It should be primarily
useful for major version upgrades at this stage.

We don't want to design it in a way that requires major rework when we are
able to copy slots and then support regular failovers. That's when the
consistency between a sequence and the table using it would be a must. So
it's better that we take that into consideration now.

--
Best Wishes,
Ashutosh Bapat

#20 Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#19)
Re: Logical Replication of sequences

On Thu, Jun 6, 2024 at 3:44 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Thu, Jun 6, 2024 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 5, 2024 at 6:01 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Wed, Jun 5, 2024 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

How about periodically sending this information?

Now, if we want to support some sort of failover then probably this
will help. Do you have that use case in mind?

Regular failover was a goal for supporting logical replication of sequences. That might be more common than major upgrade scenario.

We can't support regular failovers to subscribers unless we can
replicate/copy slots because the existing nodes connected to the
current publisher/primary would expect that. It should be primarily
useful for major version upgrades at this stage.

We don't want to design it in a way that requires major rework when we are able to copy slots and then support regular failover.

I don't think we can just copy slots like we do for standbys. The
slots would require WAL locations to continue from, so I am not sure we
can make it work for failover to subscribers.

That's when the consistency between a sequence and the table using it
would be a must. So it's better that we take that into consideration
now.

With the ideas being discussed here, I could only see the use case of
a major version upgrade or planned switchover to work. If we come up
with any other agreeable way that is better than this then we can
consider the same.

--
With Regards,
Amit Kapila.

#21 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#15)
Re: Logical Replication of sequences

On Thu, Jun 6, 2024 at 9:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 5, 2024 at 3:17 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:

On Tue, Jun 4, 2024 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Even if we decode it periodically (say each time we decode the
checkpoint record) then also we need to send the entire set of
sequences at shutdown. This is because the sequences may have changed
from the last time we sent them.

Agree. How about decoding and sending only the sequences that are
changed from the last time when they were sent? I know it requires a
bit of tracking and more work, but all I'm looking for is to reduce
the amount of work that walsenders need to do during the shutdown.

I see your point but going towards tracking the changed sequences
sounds like moving towards what we do for incremental backups unless
we can invent some other smart way.

Yes, we would need entirely new infrastructure to track the sequence
changes since the last sync. We can only determine this from WAL, and
relying on it would somehow bring us back to the approach we were
trying to achieve with the logical decoding of sequences patch.

Having said that, I like the idea of letting the user sync the
sequences via ALTER SUBSCRIPTION command and not weave the logic into
the shutdown checkpoint path. As Peter Eisentraut said here
/messages/by-id/42e5cb35-4aeb-4f58-8091-90619c7c3ecc@eisentraut.org,
this can be a good starting point to get going.

Agreed.

+1

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#22 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#18)
Re: Logical Replication of sequences

On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

To achieve this, we can allow sequences to be copied during
the initial CREATE SUBSCRIPTION command similar to what we do for
tables. And then later by new/existing command, we re-copy the already
existing sequences on the subscriber.

The options for the new command could be:
Alter Subscription ... Refresh Sequences
Alter Subscription ... Replicate Sequences

In the second option, we need to introduce a new keyword Replicate.
Can you think of any better option?

Another idea is doing that using options. For example,

For initial sequences synchronization:

CREATE SUBSCRIPTION ... WITH (copy_sequence = true);

How will it interact with the existing copy_data option? So copy_data
will become equivalent to copy_table_data, right?

Right.

For re-copy (or update) sequences:

ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);

Similar to the previous point it can be slightly confusing w.r.t
copy_data. And would copy_sequence here mean that it would copy
sequence values of both pre-existing and newly added sequences, if so,
that would make it behave differently than copy_data? The other
possibility in this direction would be to introduce an option like
replicate_all_sequences/copy_all_sequences which indicates a copy of
both pre-existing and new sequences, if any.

Copying sequence data works differently than replicating table data
(initial data copy plus logical replication). So I thought the
copy_sequence option (or whatever better name we choose) always does
both: updating pre-existing sequences and adding new sequences. REFRESH
PUBLICATION updates the tables to be subscribed, so we would also update
or add the sequences associated with these tables.

If we want to go in the direction of having an option such as
copy_(all)_sequences then do you think specifying that copy_data is
just for tables in the docs would be sufficient? I am afraid that it
would be confusing for users.

I see your point. But I guess it would not be very problematic as it
doesn't break the current behavior and copy_(all)_sequences is
primarily for upgrade use cases.

In addition to the above, the command Alter Subscription .. Refresh
Publication will fetch any missing sequences similar to what it does
for tables.

On the subscriber side, do we need to track which sequences are
created via CREATE/ALTER SUBSCRIPTION?

I think so unless we find some other way to know at refresh
publication time which all new sequences need to be part of the
subscription. What should be the behavior w.r.t sequences when the
user performs ALTER SUBSCRIPTION ... REFRESH PUBLICATION? I was
thinking similar to tables, it should fetch any missing sequence
information from the publisher.

That seems to make sense to me. But I have one question: do we want to
support replicating sequences that are not associated with any tables?
If yes, what if we refresh two different subscriptions that subscribe
to different tables in the same database? On the other hand, if not
(i.e. we replicate only sequences owned by tables), can we know which
sequences to replicate by checking the subscribed tables?

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#23 Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#22)
Re: Logical Replication of sequences

On Fri, Jun 7, 2024 at 7:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

To achieve this, we can allow sequences to be copied during
the initial CREATE SUBSCRIPTION command similar to what we do for
tables. And then later by new/existing command, we re-copy the already
existing sequences on the subscriber.

The options for the new command could be:
Alter Subscription ... Refresh Sequences
Alter Subscription ... Replicate Sequences

In the second option, we need to introduce a new keyword Replicate.
Can you think of any better option?

Another idea is doing that using options. For example,

For initial sequences synchronization:

CREATE SUBSCRIPTION ... WITH (copy_sequence = true);

How will it interact with the existing copy_data option? So copy_data
will become equivalent to copy_table_data, right?

Right.

For re-copy (or update) sequences:

ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);

Similar to the previous point it can be slightly confusing w.r.t
copy_data. And would copy_sequence here mean that it would copy
sequence values of both pre-existing and newly added sequences, if so,
that would make it behave differently than copy_data? The other
possibility in this direction would be to introduce an option like
replicate_all_sequences/copy_all_sequences which indicates a copy of
both pre-existing and new sequences, if any.

Copying sequence data works differently than replicating table data
(initial data copy and logical replication). So I thought the
copy_sequence option (or whatever better name) always does both
updating pre-existing sequences and adding new sequences. REFRESH
PUBLICATION updates the tables to be subscribed, so we also update or
add sequences associated to these tables.

Are you imagining the behavior for sequences associated with tables to
be different from that for sequences defined by the CREATE SEQUENCE ..
command? I was thinking that users would associate sequences with
publications similar to what we do for tables in both cases. For
example, they would need to explicitly mention the sequences they want
to replicate with commands like CREATE PUBLICATION ... FOR SEQUENCE s1,
s2, ...; CREATE PUBLICATION ... FOR ALL SEQUENCES; or CREATE PUBLICATION
... FOR SEQUENCES IN SCHEMA sch1;

Of these, the variants FOR ALL SEQUENCES and FOR SEQUENCES IN SCHEMA
sch1 should copy both the explicitly defined sequences and the sequences
defined with the tables. Do you think we need a different variant just
for copying sequences implicitly associated with tables (say, for
identity columns)?
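
Collecting the publication-side spellings mentioned above in one place
(none of these exists yet; s1, s2, and sch1 are placeholders):

-- explicitly listed sequences
CREATE PUBLICATION pub_seq FOR SEQUENCE s1, s2;

-- every sequence in the database, including ones backing serial or
-- identity columns
CREATE PUBLICATION pub_allseq FOR ALL SEQUENCES;

-- every sequence in a particular schema
CREATE PUBLICATION pub_schseq FOR SEQUENCES IN SCHEMA sch1;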

In addition to the above, the command Alter Subscription .. Refresh
Publication will fetch any missing sequences similar to what it does
for tables.

On the subscriber side, do we need to track which sequences are
created via CREATE/ALTER SUBSCRIPTION?

I think so unless we find some other way to know at refresh
publication time which all new sequences need to be part of the
subscription. What should be the behavior w.r.t sequences when the
user performs ALTER SUBSCRIPTION ... REFRESH PUBLICATION? I was
thinking similar to tables, it should fetch any missing sequence
information from the publisher.

It seems to make sense to me. But I have one question: do we want to
support replicating sequences that are not associated with any tables?

Yes, unless we see a problem with it.

if yes, what if we refresh two different subscriptions that subscribe
to different tables on the same database?

What problem do you see with it?

On the other hand, if no

(i.e. replicating only sequences owned by tables), can we know which
sequences to replicate by checking the subscribed tables?

Sorry, I didn't understand your question. Can you please try to
explain in more words or use some examples?

--
With Regards,
Amit Kapila.

#24 vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#9)
1 attachment(s)
Re: Logical Replication of sequences

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 5, 2024 at 9:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jun 4, 2024 at 8:56 PM Yogesh Sharma
<yogesh.sharma@catprosystems.com> wrote:

On 6/4/24 06:57, Amit Kapila wrote:

2. Provide a command say Alter Subscription ... Replicate Sequences
(or something like that) which users can perform before shutdown of
the publisher node during upgrade. This will allow copying all the
sequences from the publisher node to the subscriber node directly.
Similar to previous approach, this could also be inconvenient for
users.

This is similar to option 1 except that it is a SQL command now.

Right, but I would still prefer a command as it provides clear steps
for the upgrade. Users need to perform (a) Replicate Sequences for a
particular subscription (b) Disable that subscription (c) Perform (a)
and (b) for all the subscriptions corresponding to the publisher we
want to shut down for upgrade.

Another advantage of this approach over just a plain tool to copy all
sequences before upgrade is that here we can have the facility to copy
just the required sequences. I mean the set sequences that the user
has specified as part of the publication.

Here is a WIP patch to handle synchronizing sequences during
CREATE/ALTER SUBSCRIPTION. The following changes were made for it:
Subscriber modifications:
Enable sequence synchronization during subscription creation or
alteration using the following syntax:
CREATE SUBSCRIPTION ... WITH (sequences=true);
When a subscription is created with the sequence option enabled, the
sequence list from the specified publications in the subscription will
be retrieved from the publisher. Each sequence's data will then be
copied from the remote publisher sequence to the local subscriber
sequence by using a wal receiver connection. Since all of the sequence
updating is done within a single transaction, if any errors occur
during the copying process, the entire transaction will be rolled
back.

To refresh sequences, use the syntax:
ALTER SUBSCRIPTION REFRESH SEQUENCES;
During sequence refresh, the sequence list is updated by removing
stale sequences and adding any missing sequences. The updated sequence
list is then re-synchronized.

A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.
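
Putting the pieces described above together, end-to-end usage with this
WIP patch would look roughly like the following (names and connection
string are placeholders, and the syntax is still subject to change):

-- publisher
CREATE PUBLICATION pub1 FOR ALL SEQUENCES;

-- subscriber: copy the published sequences as part of the initial sync
CREATE SUBSCRIPTION sub1
    CONNECTION 'host=pub1 dbname=postgres'
    PUBLICATION pub1
    WITH (sequences = true);

-- subscriber, later: remove stale entries, add missing sequences, and
-- re-synchronize the values
ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;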

I have taken some code changes from Tomas's patch at [1].
I'll adjust the syntax as needed based on the ongoing discussion at [2].

[1]: /messages/by-id/09613730-5ee9-4cc3-82d8-f089be90aa64@enterprisedb.com
[2]: /messages/by-id/CAA4eK1K2X+PaErtGVQPD0k_5XqxjV_Cwg37+-pWsmKFncwc7Wg@mail.gmail.com

Regards,
Vignesh

Attachments:

v20240608-0001-Enable-sequence-synchronization-when-creat.patch (application/octet-stream)
From b48b763dd22188fc0b1b7a2933d430b7719d090d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 7 Jun 2024 21:12:12 +0530
Subject: [PATCH v20240608] Enable sequence synchronization when creating or
 modifying subscriptions.

Subscriber modifications:
Enable sequence synchronization during subscription creation or alteration
using the following syntax:
CREATE SUBSCRIPTION ... WITH (sequences=true);
When a subscription is created with the sequence option enabled, the sequence
list from the specified publications in the subscription will be retrieved
from the publisher. Each sequence's data will then be copied from the remote
publisher sequence to the local subscriber sequence. Since all of these
operations are done within a single transaction, if any errors occur during
the copying process, the entire transaction will be rolled back.

To refresh sequences, use the syntax:
ALTER SUBSCRIPTION REFRESH SEQUENCES;
During sequence refresh, the sequence list is updated by removing stale
sequences and adding any missing sequences. The updated sequence list is
then re-synchronized.

A new catalog table, pg_subscription_seq, has been introduced for mapping
subscriptions to sequences. Additionally, the sequence LSN (Log Sequence Number)
is stored, facilitating determination of sequence changes occurring before or
after the returned sequence state.

Publisher changes:
The syntax and behavior mostly mimic the handling of tables, i.e. a
publication may be defined as FOR ALL SEQUENCES (replicating all
sequences in a database), FOR ALL SEQUENCES IN SCHEMA (replicating
all sequences in a particular schema) or individual sequences.

To publish sequence modifications, the publication has to include
'sequence' action. The protocol is extended with a new message,
describing sequence increments.

A new system view pg_publication_sequences lists all the sequences
added to a publication, both directly and indirectly. Various psql
commands (\d and \dRp) are improved to also display publications
including a given sequence, or sequences included in a publication.
---
 doc/src/sgml/catalogs.sgml                    |   83 ++
 doc/src/sgml/ref/alter_publication.sgml       |   25 +-
 doc/src/sgml/ref/alter_subscription.sgml      |   20 +-
 doc/src/sgml/ref/create_publication.sgml      |   51 +-
 doc/src/sgml/ref/create_subscription.sgml     |   13 +
 doc/src/sgml/system-views.sgml                |   71 ++
 src/backend/catalog/objectaddress.c           |   45 +-
 src/backend/catalog/pg_publication.c          |  336 +++++-
 src/backend/catalog/pg_subscription.c         |   56 +
 src/backend/catalog/system_views.sql          |   10 +
 src/backend/commands/publicationcmds.c        |  422 +++++--
 src/backend/commands/sequence.c               |  156 ++-
 src/backend/commands/subscriptioncmds.c       |  174 ++-
 src/backend/executor/execReplication.c        |    4 +-
 src/backend/parser/gram.y                     |   82 +-
 src/backend/replication/logical/Makefile      |    1 +
 src/backend/replication/logical/meson.build   |    1 +
 .../replication/logical/sequencesync.c        |  357 ++++++
 src/backend/replication/pgoutput/pgoutput.c   |   11 +-
 src/backend/utils/cache/relcache.c            |   28 +-
 src/bin/pg_dump/pg_dump.c                     |   64 +-
 src/bin/pg_dump/pg_dump.h                     |    3 +
 src/bin/pg_dump/t/002_pg_dump.pl              |   47 +-
 src/bin/psql/describe.c                       |  297 +++--
 src/bin/psql/tab-complete.c                   |   43 +-
 src/include/catalog/Makefile                  |    3 +-
 src/include/catalog/meson.build               |    1 +
 src/include/catalog/pg_proc.dat               |   13 +
 src/include/catalog/pg_publication.h          |   26 +-
 .../catalog/pg_publication_namespace.h        |   12 +-
 src/include/catalog/pg_subscription.h         |    4 +
 src/include/catalog/pg_subscription_seq.h     |   67 ++
 src/include/commands/sequence.h               |    1 +
 src/include/nodes/parsenodes.h                |   11 +-
 src/test/regress/expected/object_address.out  |   20 +-
 src/test/regress/expected/oidjoins.out        |    2 +
 src/test/regress/expected/psql.out            |    6 +-
 src/test/regress/expected/publication.out     | 1055 +++++++++++++----
 src/test/regress/expected/rules.out           |    8 +
 src/test/regress/sql/object_address.sql       |    5 +-
 src/test/regress/sql/publication.sql          |  231 +++-
 41 files changed, 3369 insertions(+), 496 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/include/catalog/pg_subscription_seq.h

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 15f6255d86..38b43d57e1 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -320,6 +320,11 @@
       <entry>relation state for subscriptions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="catalog-pg-subscription-seq"><structname>pg_subscription_seq</structname></link></entry>
+      <entry>sequence information for subscriptions</entry>
+     </row>
+
      <row>
       <entry><link linkend="catalog-pg-tablespace"><structname>pg_tablespace</structname></link></entry>
       <entry>tablespaces within this database cluster</entry>
@@ -6449,6 +6454,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        Reference to schema
       </para></entry>
      </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pntype</structfield> <type>char</type>
+      </para>
+      <para>
+       Determines which object type is included from this schema.
+       <literal>t</literal> for tables, <literal>s</literal> for sequences
+      </para></entry>
+     </row>
     </tbody>
    </tgroup>
   </table>
@@ -8181,6 +8196,74 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </table>
  </sect1>
 
+<sect1 id="catalog-pg-subscription-seq">
+  <title><structname>pg_subscription_seq</structname></title>
+
+  <indexterm zone="catalog-pg-subscription-seq">
+   <primary>pg_subscription_seq</primary>
+  </indexterm>
+
+  <para>
+   The catalog <structname>pg_subscription_seq</structname> contains the
+   information for each synchronized sequence in each subscription.  This is a
+   many-to-many mapping.
+  </para>
+
+  <para>
+   This catalog only contains sequences known to the subscription after running
+   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
+   SEQUENCES</command></link>.
+  </para>
+
+  <table>
+   <title><structname>pg_subscription_seq</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sssubid</structfield> <type>oid</type>
+       (references <link linkend="catalog-pg-subscription"><structname>pg_subscription</structname></link>.<structfield>oid</structfield>)
+      </para>
+      <para>
+       Reference to subscription
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>ssseqid</structfield> <type>oid</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
+      </para>
+      <para>
+       Reference to sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sssublsn</structfield> <type>pg_lsn</type>
+      </para>
+      <para>
+       Remote LSN of the sequence for synchronization coordination.
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="catalog-pg-tablespace">
   <title><structname>pg_tablespace</structname></title>
 
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 44ae7e0e87..8caf2135fc 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+    SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
 </synopsis>
  </refsynopsisdiv>
 
@@ -44,13 +46,13 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
   </para>
 
   <para>
-   The first three variants change which tables/schemas are part of the
-   publication.  The <literal>SET</literal> clause will replace the list of
-   tables/schemas in the publication with the specified list; the existing
-   tables/schemas that were present in the publication will be removed.  The
-   <literal>ADD</literal> and <literal>DROP</literal> clauses will add and
-   remove one or more tables/schemas from the publication.  Note that adding
-   tables/schemas to a publication that is already subscribed to will require an
+   The first three variants change which objects (tables, sequences or schemas)
+   are part of the publication.  The <literal>SET</literal> clause will replace
+   the list of objects in the publication with the specified list; the existing
+   objects that were present in the publication will be removed.
+   The <literal>ADD</literal> and <literal>DROP</literal> clauses will add and
+   remove one or more objects from the publication.  Note that adding objects
+   to a publication that is already subscribed to will require an
    <link linkend="sql-altersubscription-params-refresh-publication">
    <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> action on the
    subscribing side in order to become effective. Note also that
@@ -138,6 +140,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
     </listitem>
    </varlistentry>
 
+   <varlistentry>
+    <term><replaceable class="parameter">sequence_name</replaceable></term>
+    <listitem>
+     <para>
+      Name of an existing sequence.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry>
     <term><replaceable class="parameter">schema_name</replaceable></term>
     <listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..d423128541 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -153,8 +154,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <listitem>
      <para>
       Fetch missing table information from publisher.  This will start
-      replication of tables that were added to the subscribed-to publications
-      since <link linkend="sql-createsubscription">
+      replication of tables that were added to the subscribed-to
+      publications since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher and re-synchronize
+      the sequence data.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
@@ -228,8 +239,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       <link linkend="sql-createsubscription-params-with-disable-on-error"><literal>disable_on_error</literal></link>,
       <link linkend="sql-createsubscription-params-with-password-required"><literal>password_required</literal></link>,
       <link linkend="sql-createsubscription-params-with-run-as-owner"><literal>run_as_owner</literal></link>,
-      <link linkend="sql-createsubscription-params-with-origin"><literal>origin</literal></link>, and
-      <link linkend="sql-createsubscription-params-with-failover"><literal>failover</literal></link>.
+      <link linkend="sql-createsubscription-params-with-origin"><literal>origin</literal></link>,
+      <link linkend="sql-createsubscription-params-with-failover"><literal>failover</literal></link>, and
+      <link linkend="sql-createsubscription-params-with-sequences"><literal>sequences</literal></link>.
       Only a superuser can set <literal>password_required = false</literal>.
      </para>
 
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..ba578eb9aa 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,21 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+    SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
 </synopsis>
  </refsynopsisdiv>
 
@@ -117,22 +124,39 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-for-sequence">
+    <term><literal>FOR SEQUENCE</literal></term>
+    <listitem>
+     <para>
+      Specifies a list of sequences to add to the publication.
+     </para>
+
+     <para>
+      Specifying a sequence that is part of a schema specified by <literal>FOR
+      ALL SEQUENCES IN SCHEMA</literal> is not supported.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
+    <term><literal>FOR ALL SEQUENCES</literal></term>
     <listitem>
      <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      sequences in the database, including those created in the future.
      </para>
     </listitem>
    </varlistentry>
 
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
+    <term><literal>FOR SEQUENCES IN SCHEMA</literal></term>
     <listitem>
      <para>
-      Marks the publication as one that replicates changes for all tables in
-      the specified list of schemas, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      sequences in the specified list of schemas, including tables or sequences
+      created in the future.
      </para>
 
      <para>
@@ -240,10 +264,12 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR SEQUENCE</literal>,
+   <literal>FOR ALL TABLES</literal>, <literal>FOR ALL SEQUENCES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR SEQUENCES IN SCHEMA</literal>
+   are not specified, then the publication starts out with an empty set
+   of tables/sequences.  That is useful if tables, sequences or schemas
+   are to be added later.
   </para>
 
   <para>
@@ -258,9 +284,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   </para>
 
   <para>
-   To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   To add a table or a sequence to a publication, the invoking user must
+   have ownership rights on the object.  The <command>FOR ALL TABLES</command>,
+   <command>FOR ALL SEQUENCES</command>, <command>FOR TABLES IN SCHEMA</command>
+   and <command>FOR SEQUENCES IN SCHEMA</command> clauses require the invoking
    user to be a superuser.
   </para>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..7e1da48da3 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -428,6 +428,19 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry id="sql-createsubscription-params-with-sequences">
+        <term><literal>sequences</literal> (<type>boolean</type>)</term>
+        <listitem>
+         <para>
+          Specifies whether the subscription will request the publisher to
+          synchronize sequences while creating the subscription. Note that for
+          sequences to be synchronized, the sequence also has to be added to
+          the publication.
+          The default is <literal>false</literal>.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist></para>
 
     </listitem>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 8c18bea902..5591ffd78a 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2138,6 +2143,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.  Unlike the underlying catalog
+   <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+   this view expands publications defined as <literal>FOR ALL SEQUENCES</literal>
+   and <literal>FOR SEQUENCES IN SCHEMA</literal>, so for such publications
+   there will be a row for each eligible sequence.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index 7b536ac6fd..6d68a51cc2 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1920,12 +1920,14 @@ get_object_address_publication_schema(List *object, bool missing_ok)
 	char	   *pubname;
 	char	   *schemaname;
 	Oid			schemaid;
+	char	   *objtype;
 
 	ObjectAddressSet(address, PublicationNamespaceRelationId, InvalidOid);
 
 	/* Fetch schema name and publication name from input list */
 	schemaname = strVal(linitial(object));
 	pubname = strVal(lsecond(object));
+	objtype = strVal(lthird(object));
 
 	schemaid = get_namespace_oid(schemaname, missing_ok);
 	if (!OidIsValid(schemaid))
@@ -1938,10 +1940,12 @@ get_object_address_publication_schema(List *object, bool missing_ok)
 
 	/* Find the publication schema mapping in syscache */
 	address.objectId =
-		GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+		GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
 						Anum_pg_publication_namespace_oid,
 						ObjectIdGetDatum(schemaid),
-						ObjectIdGetDatum(pub->oid));
+						ObjectIdGetDatum(pub->oid),
+						CharGetDatum(objtype[0]));
+
 	if (!OidIsValid(address.objectId) && !missing_ok)
 		ereport(ERROR,
 				(errcode(ERRCODE_UNDEFINED_OBJECT),
@@ -2216,7 +2220,6 @@ pg_get_object_address(PG_FUNCTION_ARGS)
 	 */
 	switch (type)
 	{
-		case OBJECT_PUBLICATION_NAMESPACE:
 		case OBJECT_USER_MAPPING:
 			if (list_length(name) != 1)
 				ereport(ERROR,
@@ -2249,6 +2252,7 @@ pg_get_object_address(PG_FUNCTION_ARGS)
 						 errmsg("name list length must be at least %d", 3)));
 			/* fall through to check args length */
 			/* FALLTHROUGH */
+		case OBJECT_PUBLICATION_NAMESPACE:
 		case OBJECT_OPERATOR:
 			if (list_length(args) != 2)
 				ereport(ERROR,
@@ -2321,6 +2325,8 @@ pg_get_object_address(PG_FUNCTION_ARGS)
 			objnode = (Node *) list_make2(name, linitial(args));
 			break;
 		case OBJECT_PUBLICATION_NAMESPACE:
+			objnode = (Node *) list_make3(linitial(name), linitial(args), lsecond(args));
+			break;
 		case OBJECT_USER_MAPPING:
 			objnode = (Node *) list_make2(linitial(name), linitial(args));
 			break;
@@ -2824,11 +2830,12 @@ get_catalog_object_by_oid(Relation catalog, AttrNumber oidcol, Oid objectId)
  *
  * Get publication name and schema name from the object address into pubname and
  * nspname. Both pubname and nspname are palloc'd strings which will be freed by
- * the caller.
+ * the caller. The last parameter specifies which object type is included from
+ * the schema.
  */
 static bool
 getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
-						 char **pubname, char **nspname)
+						 char **pubname, char **nspname, char **objtype)
 {
 	HeapTuple	tup;
 	Form_pg_publication_namespace pnform;
@@ -2864,6 +2871,14 @@ getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
 		return false;
 	}
 
+	/*
+	 * The type is always a single character, but we need to pass it as a
+	 * string, so allocate two characters and set the first one. The second one
+	 * is \0.
+	 */
+	*objtype = palloc0(2);
+	*objtype[0] = pnform->pntype;
+
 	ReleaseSysCache(tup);
 	return true;
 }
@@ -3934,15 +3949,17 @@ getObjectDescription(const ObjectAddress *object, bool missing_ok)
 			{
 				char	   *pubname;
 				char	   *nspname;
+				char	   *objtype;
 
 				if (!getPublicationSchemaInfo(object, missing_ok,
-											  &pubname, &nspname))
+											  &pubname, &nspname, &objtype))
 					break;
 
-				appendStringInfo(&buffer, _("publication of schema %s in publication %s"),
-								 nspname, pubname);
+				appendStringInfo(&buffer, _("publication of schema %s in publication %s type %s"),
+								 nspname, pubname, objtype);
 				pfree(pubname);
 				pfree(nspname);
+				pfree(objtype);
 				break;
 			}
 
@@ -5785,18 +5802,24 @@ getObjectIdentityParts(const ObjectAddress *object,
 			{
 				char	   *pubname;
 				char	   *nspname;
+				char	   *objtype;
 
 				if (!getPublicationSchemaInfo(object, missing_ok, &pubname,
-											  &nspname))
+											  &nspname, &objtype))
 					break;
-				appendStringInfo(&buffer, "%s in publication %s",
-								 nspname, pubname);
+				appendStringInfo(&buffer, "%s in publication %s type %s",
+								 nspname, pubname, objtype);
 
 				if (objargs)
 					*objargs = list_make1(pubname);
 				else
 					pfree(pubname);
 
+				if (objargs)
+					*objargs = lappend(*objargs, objtype);
+				else
+					pfree(objtype);
+
 				if (objname)
 					*objname = list_make1(nspname);
 				else
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..ab57ab6873 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -58,9 +58,10 @@ static void publication_translate_columns(Relation targetrel, List *columns,
 static void
 check_publication_add_relation(Relation targetrel)
 {
-	/* Must be a regular or partitioned table */
+	/* Must be a regular or partitioned table, or a sequence */
 	if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
-		RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+		RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+		RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 				 errmsg("cannot add relation \"%s\" to publication",
@@ -137,7 +138,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -228,6 +230,52 @@ filter_partitions(List *table_infos)
 	}
 }
 
+/*
+ * Check the character is a valid object type for schema publication.
+ *
+ * This recognizes either 't' for tables or 's' for sequences. Places that
+ * need to handle 'u' for unsupported relkinds need to do that explicitly.
+ */
+static void
+AssertObjectTypeValid(char objectType)
+{
+#ifdef USE_ASSERT_CHECKING
+	Assert(objectType == PUB_OBJTYPE_SEQUENCE || objectType == PUB_OBJTYPE_TABLE);
+#endif
+}
+
+/*
+ * Determine the object type matching a given relkind value.
+ */
+char
+pub_get_object_type_for_relkind(char relkind)
+{
+	/* sequence maps directly to sequence relkind */
+	if (relkind == RELKIND_SEQUENCE)
+		return PUB_OBJTYPE_SEQUENCE;
+
+	/* for table, we match either regular or partitioned table */
+	if (relkind == RELKIND_RELATION ||
+		relkind == RELKIND_PARTITIONED_TABLE)
+		return PUB_OBJTYPE_TABLE;
+
+	return PUB_OBJTYPE_UNSUPPORTED;
+}
+
+/*
+ * Determine if publication object type matches the relkind.
+ *
+ * Returns true if the relation matches object type replicated by this schema,
+ * false otherwise.
+ */
+static bool
+pub_object_type_matches_relkind(char objectType, char relkind)
+{
+	AssertObjectTypeValid(objectType);
+
+	return (pub_get_object_type_for_relkind(relkind) == objectType);
+}
+
 /*
  * Returns true if any schema is associated with the publication, false if no
  * schema is associated with the publication.
@@ -248,7 +296,7 @@ is_schema_publication(Oid pubid)
 				ObjectIdGetDatum(pubid));
 
 	scan = systable_beginscan(pubschsrel,
-							  PublicationNamespacePnnspidPnpubidIndexId,
+							  PublicationNamespacePnnspidPnpubidPntypeIndexId,
 							  true, NULL, 1, &scankey);
 	tup = systable_getnext(scan);
 	result = HeapTupleIsValid(tup);
@@ -334,7 +382,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level
 		}
 		else
 		{
-			aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));
+			/* we only search for ancestors of tables, so PUB_OBJTYPE_TABLE */
+			aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor),
+												  PUB_OBJTYPE_TABLE);
 			if (list_member_oid(aschemaPubids, puboid))
 			{
 				topmost_relid = ancestor;
@@ -603,7 +653,7 @@ pub_collist_to_bitmapset(Bitmapset *columns, Datum pubcols, MemoryContext mcxt)
  * Insert new publication / schema mapping.
  */
 ObjectAddress
-publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
+publication_add_schema(Oid pubid, Oid schemaid, char objectType, bool if_not_exists)
 {
 	Relation	rel;
 	HeapTuple	tup;
@@ -615,6 +665,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
 	ObjectAddress myself,
 				referenced;
 
+	AssertObjectTypeValid(objectType);
+
 	rel = table_open(PublicationNamespaceRelationId, RowExclusiveLock);
 
 	/*
@@ -622,9 +674,10 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
 	 * duplicates, it's here just to provide nicer error message in common
 	 * case. The real protection is the unique key on the catalog.
 	 */
-	if (SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+	if (SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
 							  ObjectIdGetDatum(schemaid),
-							  ObjectIdGetDatum(pubid)))
+							  ObjectIdGetDatum(pubid),
+							  CharGetDatum(objectType)))
 	{
 		table_close(rel, RowExclusiveLock);
 
@@ -650,6 +703,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
 		ObjectIdGetDatum(pubid);
 	values[Anum_pg_publication_namespace_pnnspid - 1] =
 		ObjectIdGetDatum(schemaid);
+	values[Anum_pg_publication_namespace_pntype - 1] =
+		CharGetDatum(objectType);
 
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
@@ -675,7 +730,7 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
 	 * publication_add_relation for why we need to consider all the
 	 * partitions.
 	 */
-	schemaRels = GetSchemaPublicationRelations(schemaid,
+	schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
 											   PUBLICATION_PART_ALL);
 	InvalidatePublicationRels(schemaRels);
 
@@ -709,11 +764,14 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE / FOR SEQUENCE publications, the FOR
+ * ALL TABLES / SEQUENCES should use GetAllTablesPublicationRelations()
+ * and GetAllSequencesPublicationRelations().
+ *
+ * XXX pub_partopt only matters for tables, not sequences.
  */
 List *
-GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetPublicationRelations(Oid pubid, char objectType, PublicationPartOpt pub_partopt)
 {
 	List	   *result;
 	Relation	pubrelsrel;
@@ -721,6 +779,8 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	SysScanDesc scan;
 	HeapTuple	tup;
 
+	AssertObjectTypeValid(objectType);
+
 	/* Find all publications associated with the relation. */
 	pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
 
@@ -735,11 +795,29 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	result = NIL;
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
+		char		relkind;
 		Form_pg_publication_rel pubrel;
 
 		pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
-		result = GetPubPartitionOptionRelations(result, pub_partopt,
-												pubrel->prrelid);
+		relkind = get_rel_relkind(pubrel->prrelid);
+
+		/*
+		 * If the relkind does not match the requested object type, ignore the
+		 * relation. For example we might be interested only in sequences, so
+		 * we ignore tables.
+		 */
+		if (!pub_object_type_matches_relkind(objectType, relkind))
+			continue;
+
+		/*
+		 * We don't have partitioned sequences, so just add them to the list.
+		 * Otherwise consider adding all child relations, if requested.
+		 */
+		if (relkind == RELKIND_SEQUENCE)
+			result = lappend_oid(result, pubrel->prrelid);
+		else
+			result = GetPubPartitionOptionRelations(result, pub_partopt,
+													pubrel->prrelid);
 	}
 
 	systable_endscan(scan);
@@ -789,6 +867,43 @@ GetAllTablesPublications(void)
 	return result;
 }
 
+/*
+ * Gets list of publication oids for publications marked as FOR ALL SEQUENCES.
+ */
+List *
+GetAllSequencesPublications(void)
+{
+	List	   *result;
+	Relation	rel;
+	ScanKeyData scankey;
+	SysScanDesc scan;
+	HeapTuple	tup;
+
+	/* Find all publications that are marked as for all sequences. */
+	rel = table_open(PublicationRelationId, AccessShareLock);
+
+	ScanKeyInit(&scankey,
+				Anum_pg_publication_puballsequences,
+				BTEqualStrategyNumber, F_BOOLEQ,
+				BoolGetDatum(true));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 1, &scankey);
+
+	result = NIL;
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Oid			oid = ((Form_pg_publication) GETSTRUCT(tup))->oid;
+
+		result = lappend_oid(result, oid);
+	}
+
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return result;
+}
+
 /*
  * Gets list of all relation published by FOR ALL TABLES publication(s).
  *
@@ -855,28 +970,38 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 /*
  * Gets the list of schema oids for a publication.
  *
- * This should only be used FOR TABLES IN SCHEMA publications.
+ * This should only be used FOR TABLES IN SCHEMA and FOR SEQUENCES IN SCHEMA
+ * publications.
+ *
+ * 'objectType' determines whether to get FOR TABLE or FOR SEQUENCES schemas
  */
 List *
-GetPublicationSchemas(Oid pubid)
+GetPublicationSchemas(Oid pubid, char objectType)
 {
 	List	   *result = NIL;
 	Relation	pubschsrel;
-	ScanKeyData scankey;
+	ScanKeyData scankey[2];
 	SysScanDesc scan;
 	HeapTuple	tup;
 
+	AssertObjectTypeValid(objectType);
+
 	/* Find all schemas associated with the publication */
 	pubschsrel = table_open(PublicationNamespaceRelationId, AccessShareLock);
 
-	ScanKeyInit(&scankey,
+	ScanKeyInit(&scankey[0],
 				Anum_pg_publication_namespace_pnpubid,
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(pubid));
 
+	ScanKeyInit(&scankey[1],
+				Anum_pg_publication_namespace_pntype,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(objectType));
+
 	scan = systable_beginscan(pubschsrel,
-							  PublicationNamespacePnnspidPnpubidIndexId,
-							  true, NULL, 1, &scankey);
+							  PublicationNamespacePnnspidPnpubidPntypeIndexId,
+							  true, NULL, 2, scankey);
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
 		Form_pg_publication_namespace pubsch;
@@ -894,14 +1019,26 @@ GetPublicationSchemas(Oid pubid)
 
 /*
  * Gets the list of publication oids associated with a specified schema.
+ *
+ * objectType specifies whether we're looking for schemas including tables or
+ * sequences.
+ *
+ * Note: relcache calls this for all object types, not just tables and sequences,
+ * which is why we handle the PUB_OBJTYPE_UNSUPPORTED object type too.
  */
 List *
-GetSchemaPublications(Oid schemaid)
+GetSchemaPublications(Oid schemaid, char objectType)
 {
 	List	   *result = NIL;
 	CatCList   *pubschlist;
 	int			i;
 
+	/* unsupported object type */
+	if (objectType == PUB_OBJTYPE_UNSUPPORTED)
+		return result;
+
+	AssertObjectTypeValid(objectType);
+
 	/* Find all publications associated with the schema */
 	pubschlist = SearchSysCacheList1(PUBLICATIONNAMESPACEMAP,
 									 ObjectIdGetDatum(schemaid));
@@ -909,6 +1046,11 @@ GetSchemaPublications(Oid schemaid)
 	{
 		HeapTuple	tup = &pubschlist->members[i]->tuple;
 		Oid			pubid = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pnpubid;
+		char		pntype = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pntype;
+
+		/* Skip schemas publishing a different object type. */
+		if (pntype != objectType)
+			continue;
 
 		result = lappend_oid(result, pubid);
 	}
@@ -920,9 +1062,13 @@ GetSchemaPublications(Oid schemaid)
 
 /*
  * Get the list of publishable relation oids for a specified schema.
+ *
+ * objectType specifies whether this is FOR ALL TABLES IN SCHEMA or FOR ALL
+ * SEQUENCES IN SCHEMA
  */
 List *
-GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
+GetSchemaPublicationRelations(Oid schemaid, char objectType,
+							  PublicationPartOpt pub_partopt)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -931,6 +1077,7 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
 	List	   *result = NIL;
 
 	Assert(OidIsValid(schemaid));
+	AssertObjectTypeValid(objectType);
 
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
@@ -951,9 +1098,17 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
 			continue;
 
 		relkind = get_rel_relkind(relid);
-		if (relkind == RELKIND_RELATION)
-			result = lappend_oid(result, relid);
-		else if (relkind == RELKIND_PARTITIONED_TABLE)
+
+		/* Skip if the relkind does not match FOR ALL TABLES / SEQUENCES. */
+		if (!pub_object_type_matches_relkind(objectType, relkind))
+			continue;
+
+		/*
+		 * If the object is a partitioned table, lookup all the child
+		 * relations (if requested). Otherwise just add the object to the
+		 * list.
+		 */
+		if (relkind == RELKIND_PARTITIONED_TABLE)
 		{
 			List	   *partitionrels = NIL;
 
@@ -966,7 +1121,11 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
 														   pub_partopt,
 														   relForm->oid);
 			result = list_concat_unique_oid(result, partitionrels);
+			continue;
 		}
+
+		/* non-partitioned tables and sequences */
+		result = lappend_oid(result, relid);
 	}
 
 	table_endscan(scan);
@@ -975,28 +1134,68 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
 }
 
 /*
- * Gets the list of all relations published by FOR TABLES IN SCHEMA
- * publication.
+ * Gets the list of all relations published by FOR TABLES IN SCHEMA or
+ * FOR SEQUENCES IN SCHEMA publication.
  */
 List *
-GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetAllSchemaPublicationRelations(Oid pubid, char objectType,
+								 PublicationPartOpt pub_partopt)
 {
 	List	   *result = NIL;
-	List	   *pubschemalist = GetPublicationSchemas(pubid);
+	List	   *pubschemalist = GetPublicationSchemas(pubid, objectType);
 	ListCell   *cell;
 
+	AssertObjectTypeValid(objectType);
+
 	foreach(cell, pubschemalist)
 	{
 		Oid			schemaid = lfirst_oid(cell);
 		List	   *schemaRels = NIL;
 
-		schemaRels = GetSchemaPublicationRelations(schemaid, pub_partopt);
+		schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
+												   pub_partopt);
 		result = list_concat(result, schemaRels);
 	}
 
 	return result;
 }
 
+/*
+ * Gets the list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,10 +1218,12 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
 	pub->pubactions.pubtruncate = pubform->pubtruncate;
+	pub->pubactions.pubsequence = pubform->pubsequence;
 	pub->pubviaroot = pubform->pubviaroot;
 
 	ReleaseSysCache(tup);
@@ -1103,10 +1304,12 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 						   *schemarelids;
 
 				relids = GetPublicationRelations(pub_elem->oid,
+												 PUB_OBJTYPE_TABLE,
 												 pub_elem->pubviaroot ?
 												 PUBLICATION_PART_ROOT :
 												 PUBLICATION_PART_LEAF);
 				schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
+																PUB_OBJTYPE_TABLE,
 																pub_elem->pubviaroot ?
 																PUBLICATION_PART_ROOT :
 																PUBLICATION_PART_LEAF);
@@ -1192,9 +1395,10 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 		 * FOR TABLES IN SCHEMA publications.
 		 */
 		if (!pub->alltables &&
-			!SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+			!SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
 								   ObjectIdGetDatum(schemaid),
-								   ObjectIdGetDatum(pub->oid)))
+								   ObjectIdGetDatum(pub->oid),
+								   PUB_OBJTYPE_TABLE))
 			pubtuple = SearchSysCacheCopy2(PUBLICATIONRELMAP,
 										   ObjectIdGetDatum(relid),
 										   ObjectIdGetDatum(pub->oid));
@@ -1254,3 +1458,71 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		/*
+		 * Collect all the sequences in the publication, either all the
+		 * sequences in the database (FOR ALL SEQUENCES), or the ones added
+		 * directly or through schemas.
+		 */
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+		else
+		{
+			List	   *relids,
+					   *schemarelids;
+
+			relids = GetPublicationRelations(publication->oid,
+											 PUB_OBJTYPE_SEQUENCE,
+											 publication->pubviaroot ?
+											 PUBLICATION_PART_ROOT :
+											 PUBLICATION_PART_LEAF);
+			schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+															PUB_OBJTYPE_SEQUENCE,
+															publication->pubviaroot ?
+															PUBLICATION_PART_ROOT :
+															PUBLICATION_PART_LEAF);
+			sequences = list_concat_unique_oid(relids, schemarelids);
+		}
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..35b02721b5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -21,6 +21,7 @@
 #include "catalog/indexing.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
+#include "catalog/pg_subscription_seq.h"
 #include "catalog/pg_type.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
@@ -72,6 +73,7 @@ GetSubscription(Oid subid, bool missing_ok)
 	sub->passwordrequired = subform->subpasswordrequired;
 	sub->runasowner = subform->subrunasowner;
 	sub->failover = subform->subfailover;
+	sub->sequences = subform->subsequences;
 
 	/* Get conninfo */
 	datum = SysCacheGetAttrNotNull(SUBSCRIPTIONOID,
@@ -551,3 +553,57 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+
+	rel = table_open(SubscriptionSeqRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_seq_sssubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_seq subseq;
+		SubscriptionSeqInfo *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		subseq = (Form_pg_subscription_seq) GETSTRUCT(tup);
+
+		seqinfo = (SubscriptionSeqInfo *) palloc(sizeof(SubscriptionSeqInfo));
+		seqinfo->seqid = subseq->ssseqid;
+		d = SysCacheGetAttr(SUBSCRIPTIONSEQMAP, tup,
+							Anum_pg_subscription_seq_sssublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
\ No newline at end of file
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 53047cab5f..97ad8ebb01 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -398,6 +398,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..a797f81c62 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -15,6 +15,7 @@
 #include "postgres.h"
 
 #include "access/htup_details.h"
+#include "access/relation.h"
 #include "access/table.h"
 #include "access/xact.h"
 #include "catalog/catalog.h"
@@ -61,15 +62,17 @@ typedef struct rf_context
 	Oid			parentid;		/* relid of the parent relation */
 } rf_context;
 
-static List *OpenTableList(List *tables);
-static void CloseTableList(List *rels);
+static List *OpenRelationList(List *rels, char objectType);
+static void CloseRelationList(List *rels);
 static void LockSchemaList(List *schemalist);
-static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
-								 AlterPublicationStmt *stmt);
-static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
-static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
-								  AlterPublicationStmt *stmt);
-static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static void PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
+									AlterPublicationStmt *stmt);
+static void PublicationDropRelations(Oid pubid, List *rels, bool missing_ok);
+static void PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+								  bool if_not_exists, AlterPublicationStmt *stmt);
+static void PublicationDropSchemas(Oid pubid, List *schemas, char objectType,
+								   bool missing_ok);
+
 
 
 static void
@@ -90,6 +93,7 @@ parse_publication_options(ParseState *pstate,
 	pubactions->pubupdate = true;
 	pubactions->pubdelete = true;
 	pubactions->pubtruncate = true;
+	pubactions->pubsequence = true;
 	*publish_via_partition_root = false;
 
 	/* Parse options */
@@ -114,6 +118,7 @@ parse_publication_options(ParseState *pstate,
 			pubactions->pubupdate = false;
 			pubactions->pubdelete = false;
 			pubactions->pubtruncate = false;
+			pubactions->pubsequence = false;
 
 			*publish_given = true;
 			publish = defGetString(defel);
@@ -137,6 +142,8 @@ parse_publication_options(ParseState *pstate,
 					pubactions->pubdelete = true;
 				else if (strcmp(publish_opt, "truncate") == 0)
 					pubactions->pubtruncate = true;
+				else if (strcmp(publish_opt, "sequence") == 0)
+					pubactions->pubsequence = true;
 				else
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
@@ -164,7 +171,8 @@ parse_publication_options(ParseState *pstate,
  */
 static void
 ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
-						   List **rels, List **schemas)
+						   List **tables, List **sequences,
+						   List **tables_schemas, List **sequences_schemas)
 {
 	ListCell   *cell;
 	PublicationObjSpec *pubobj;
@@ -182,13 +190,22 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
 		switch (pubobj->pubobjtype)
 		{
 			case PUBLICATIONOBJ_TABLE:
-				*rels = lappend(*rels, pubobj->pubtable);
+				*tables = lappend(*tables, pubobj->pubtable);
+				break;
+			case PUBLICATIONOBJ_SEQUENCE:
+				*sequences = lappend(*sequences, pubobj->pubtable);
 				break;
 			case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
 				schemaid = get_namespace_oid(pubobj->name, false);
 
 				/* Filter out duplicates if user specifies "sch1, sch1" */
-				*schemas = list_append_unique_oid(*schemas, schemaid);
+				*tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+				break;
+			case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+				schemaid = get_namespace_oid(pubobj->name, false);
+
+				/* Filter out duplicates if user specifies "sch1, sch1" */
+				*sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
 				break;
 			case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
 				search_path = fetch_search_path(false);
@@ -201,7 +218,20 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
 				list_free(search_path);
 
 				/* Filter out duplicates if user specifies "sch1, sch1" */
-				*schemas = list_append_unique_oid(*schemas, schemaid);
+				*tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+				break;
+			case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+				search_path = fetch_search_path(false);
+				if (search_path == NIL) /* nothing valid in search_path? */
+					ereport(ERROR,
+							errcode(ERRCODE_UNDEFINED_SCHEMA),
+							errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+				schemaid = linitial_oid(search_path);
+				list_free(search_path);
+
+				/* Filter out duplicates if user specifies "sch1, sch1" */
+				*sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
 				break;
 			default:
 				/* shouldn't happen */
@@ -727,6 +757,7 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 ObjectAddress
 CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 {
+	ListCell   *lc;
 	Relation	rel;
 	ObjectAddress myself;
 	Oid			puboid;
@@ -738,8 +769,27 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	bool		publish_via_partition_root_given;
 	bool		publish_via_partition_root;
 	AclResult	aclresult;
-	List	   *relations = NIL;
-	List	   *schemaidlist = NIL;
+	List	   *tables = NIL;
+	List	   *sequences = NIL;
+	List	   *tables_schemaidlist = NIL;
+	List	   *sequences_schemaidlist = NIL;
+
+	bool		for_all_tables = false;
+	bool		for_all_sequences = false;
+
+	/*
+	 * Translate the list of object types (represented by strings) to bool
+	 * flags.
+	 */
+	foreach(lc, stmt->for_all_objects)
+	{
+		char	   *val = strVal(lfirst(lc));
+
+		if (strcmp(val, "tables") == 0)
+			for_all_tables = true;
+		else if (strcmp(val, "sequences") == 0)
+			for_all_sequences = true;
+	}
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -748,11 +798,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 					   get_database_name(MyDatabaseId));
 
 	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	if (for_all_tables && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 				 errmsg("must be superuser to create FOR ALL TABLES publication")));
 
+	/* FOR ALL SEQUENCES requires superuser */
+	if (for_all_sequences && !superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
 	/* Check if name is used */
@@ -782,7 +838,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+		BoolGetDatum(for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -791,6 +849,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		BoolGetDatum(pubactions.pubdelete);
 	values[Anum_pg_publication_pubtruncate - 1] =
 		BoolGetDatum(pubactions.pubtruncate);
+	values[Anum_pg_publication_pubsequence - 1] =
+		BoolGetDatum(pubactions.pubsequence);
 	values[Anum_pg_publication_pubviaroot - 1] =
 		BoolGetDatum(publish_via_partition_root);
 
@@ -808,46 +868,88 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (for_all_tables || for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+
+	/*
+	 * If the publication might have either tables or sequences (directly or
+	 * through a schema), process that.
+	 */
+	if (!for_all_tables || !for_all_sequences)
 	{
-		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
-								   &schemaidlist);
+		ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+								   &tables, &sequences,
+								   &tables_schemaidlist,
+								   &sequences_schemaidlist);
 
 		/* FOR TABLES IN SCHEMA requires superuser */
-		if (schemaidlist != NIL && !superuser())
+		if (tables_schemaidlist != NIL && !superuser())
 			ereport(ERROR,
 					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					errmsg("must be superuser to create FOR TABLES IN SCHEMA publication"));
 
-		if (relations != NIL)
+		/* FOR SEQUENCES IN SCHEMA requires superuser */
+		if (sequences_schemaidlist != NIL && !superuser())
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create FOR ALL SEQUENCES IN SCHEMA publication"));
+
+		/* tables added directly */
+		if (tables != NIL)
 		{
 			List	   *rels;
 
-			rels = OpenTableList(relations);
+			rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
+
 			TransformPubWhereClauses(rels, pstate->p_sourcetext,
 									 publish_via_partition_root);
 
 			CheckPubRelationColumnList(stmt->pubname, rels,
-									   schemaidlist != NIL,
+									   tables_schemaidlist != NIL,
 									   publish_via_partition_root);
 
-			PublicationAddTables(puboid, rels, true, NULL);
-			CloseTableList(rels);
+			PublicationAddRelations(puboid, rels, true, NULL);
+			CloseRelationList(rels);
+		}
+
+		/* sequences added directly */
+		if (sequences != NIL)
+		{
+			List	   *rels;
+
+			rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+
+			PublicationAddRelations(puboid, rels, true, NULL);
+			CloseRelationList(rels);
 		}
 
-		if (schemaidlist != NIL)
+		/* tables added through a schema */
+		if (tables_schemaidlist != NIL)
 		{
 			/*
 			 * Schema lock is held until the publication is created to prevent
 			 * concurrent schema deletion.
 			 */
-			LockSchemaList(schemaidlist);
-			PublicationAddSchemas(puboid, schemaidlist, true, NULL);
+			LockSchemaList(tables_schemaidlist);
+			PublicationAddSchemas(puboid,
+								  tables_schemaidlist, PUB_OBJTYPE_TABLE,
+								  true, NULL);
+		}
+
+		/* sequences added through a schema */
+		if (sequences_schemaidlist != NIL)
+		{
+			/*
+			 * Schema lock is held until the publication is created to prevent
+			 * concurrent schema deletion.
+			 */
+			LockSchemaList(sequences_schemaidlist);
+			PublicationAddSchemas(puboid,
+								  sequences_schemaidlist, PUB_OBJTYPE_SEQUENCE,
+								  true, NULL);
 		}
 	}
 
@@ -910,6 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 						   AccessShareLock);
 
 		root_relids = GetPublicationRelations(pubform->oid,
+											  PUB_OBJTYPE_TABLE,
 											  PUBLICATION_PART_ROOT);
 
 		foreach(lc, root_relids)
@@ -989,6 +1092,9 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 
 		values[Anum_pg_publication_pubtruncate - 1] = BoolGetDatum(pubactions.pubtruncate);
 		replaces[Anum_pg_publication_pubtruncate - 1] = true;
+
+		values[Anum_pg_publication_pubsequence - 1] = BoolGetDatum(pubactions.pubsequence);
+		replaces[Anum_pg_publication_pubsequence - 1] = true;
 	}
 
 	if (publish_via_partition_root_given)
@@ -1008,7 +1114,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1024,6 +1130,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 		 */
 		if (root_relids == NIL)
 			relids = GetPublicationRelations(pubform->oid,
+											 PUB_OBJTYPE_TABLE,
 											 PUBLICATION_PART_ALL);
 		else
 		{
@@ -1037,7 +1144,20 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 														lfirst_oid(lc));
 		}
 
+		/* tables */
 		schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+														PUB_OBJTYPE_TABLE,
+														PUBLICATION_PART_ALL);
+		relids = list_concat_unique_oid(relids, schemarelids);
+
+		/* sequences */
+		relids = list_concat_unique_oid(relids,
+										GetPublicationRelations(pubform->oid,
+																PUB_OBJTYPE_SEQUENCE,
+																PUBLICATION_PART_ALL));
+
+		schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+														PUB_OBJTYPE_SEQUENCE,
 														PUBLICATION_PART_ALL);
 		relids = list_concat_unique_oid(relids, schemarelids);
 
@@ -1092,7 +1212,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
 	if (!tables && stmt->action != AP_SetObjects)
 		return;
 
-	rels = OpenTableList(tables);
+	rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
 
 	if (stmt->action == AP_AddObjects)
 	{
@@ -1103,13 +1223,14 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
 		CheckPubRelationColumnList(stmt->pubname, rels, publish_schema,
 								   pubform->pubviaroot);
 
-		PublicationAddTables(pubid, rels, false, stmt);
+		PublicationAddRelations(pubid, rels, false, stmt);
 	}
 	else if (stmt->action == AP_DropObjects)
-		PublicationDropTables(pubid, rels, false);
+		PublicationDropRelations(pubid, rels, false);
 	else						/* AP_SetObjects */
 	{
 		List	   *oldrelids = GetPublicationRelations(pubid,
+														PUB_OBJTYPE_TABLE,
 														PUBLICATION_PART_ROOT);
 		List	   *delrels = NIL;
 		ListCell   *oldlc;
@@ -1226,18 +1347,18 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
 		}
 
 		/* And drop them. */
-		PublicationDropTables(pubid, delrels, true);
+		PublicationDropRelations(pubid, delrels, true);
 
 		/*
 		 * Don't bother calculating the difference for adding, we'll catch and
 		 * skip existing ones when doing catalog update.
 		 */
-		PublicationAddTables(pubid, rels, true, stmt);
+		PublicationAddRelations(pubid, rels, true, stmt);
 
-		CloseTableList(delrels);
+		CloseRelationList(delrels);
 	}
 
-	CloseTableList(rels);
+	CloseRelationList(rels);
 }
 
 /*
@@ -1247,7 +1368,8 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
  */
 static void
 AlterPublicationSchemas(AlterPublicationStmt *stmt,
-						HeapTuple tup, List *schemaidlist)
+						HeapTuple tup, List *schemaidlist,
+						char objectType)
 {
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
@@ -1269,7 +1391,7 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
 		ListCell   *lc;
 		List	   *reloids;
 
-		reloids = GetPublicationRelations(pubform->oid, PUBLICATION_PART_ROOT);
+		reloids = GetPublicationRelations(pubform->oid, objectType, PUBLICATION_PART_ROOT);
 
 		foreach(lc, reloids)
 		{
@@ -1296,13 +1418,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
 			ReleaseSysCache(coltuple);
 		}
 
-		PublicationAddSchemas(pubform->oid, schemaidlist, false, stmt);
+		PublicationAddSchemas(pubform->oid, schemaidlist, objectType, false, stmt);
 	}
 	else if (stmt->action == AP_DropObjects)
-		PublicationDropSchemas(pubform->oid, schemaidlist, false);
+		PublicationDropSchemas(pubform->oid, schemaidlist, objectType, false);
 	else						/* AP_SetObjects */
 	{
-		List	   *oldschemaids = GetPublicationSchemas(pubform->oid);
+		List	   *oldschemaids = GetPublicationSchemas(pubform->oid, objectType);
 		List	   *delschemas = NIL;
 
 		/* Identify which schemas should be dropped */
@@ -1315,13 +1437,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
 		LockSchemaList(delschemas);
 
 		/* And drop them */
-		PublicationDropSchemas(pubform->oid, delschemas, true);
+		PublicationDropSchemas(pubform->oid, delschemas, objectType, true);
 
 		/*
 		 * Don't bother calculating the difference for adding, we'll catch and
 		 * skip existing ones when doing catalog update.
 		 */
-		PublicationAddSchemas(pubform->oid, schemaidlist, true, stmt);
+		PublicationAddSchemas(pubform->oid, schemaidlist, objectType, true, stmt);
 	}
 }
 
@@ -1331,12 +1453,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
  */
 static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
-					  List *tables, List *schemaidlist)
+					  List *tables, List *tables_schemaidlist,
+					  List *sequences, List *sequences_schemaidlist)
 {
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
-		schemaidlist && !superuser())
+		(tables_schemaidlist || sequences_schemaidlist) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 				 errmsg("must be superuser to add or set schemas")));
@@ -1345,13 +1468,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	if (tables_schemaidlist && pubform->puballtables)
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
 						NameStr(pubform->pubname)),
 				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
 
+	/*
+	 * Check that user is allowed to manipulate the publication sequences in
+	 * schema
+	 */
+	if (sequences_schemaidlist && pubform->puballsequences)
+		ereport(ERROR,
+				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				 errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+						NameStr(pubform->pubname)),
+				 errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
 	/* Check that user is allowed to manipulate the publication tables. */
 	if (tables && pubform->puballtables)
 		ereport(ERROR,
@@ -1359,6 +1493,107 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
 						NameStr(pubform->pubname)),
 				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+	/* Check that user is allowed to manipulate the publication sequences. */
+	if (sequences && pubform->puballsequences)
+		ereport(ERROR,
+				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				 errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+						NameStr(pubform->pubname)),
+				 errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequences to/from a publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+						  List *sequences)
+{
+	List	   *rels = NIL;
+	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+	Oid			pubid = pubform->oid;
+
+	/*
+	 * Nothing to do if no objects, except in SET: for that it is quite
+	 * possible that the user has not specified any sequences, in which case
+	 * we need to remove all the existing sequences.
+	 */
+	if (!sequences && stmt->action != AP_SetObjects)
+		return;
+
+	rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+
+	if (stmt->action == AP_AddObjects)
+	{
+		PublicationAddRelations(pubid, rels, false, stmt);
+	}
+	else if (stmt->action == AP_DropObjects)
+		PublicationDropRelations(pubid, rels, false);
+	else						/* AP_SetObjects */
+	{
+		List	   *oldrelids = GetPublicationRelations(pubid,
+														PUB_OBJTYPE_SEQUENCE,
+														PUBLICATION_PART_ROOT);
+		List	   *delrels = NIL;
+		ListCell   *oldlc;
+
+		/*
+		 * To recreate the relation list for the publication, look for
+		 * existing relations that do not need to be dropped.
+		 */
+		foreach(oldlc, oldrelids)
+		{
+			Oid			oldrelid = lfirst_oid(oldlc);
+			ListCell   *newlc;
+			PublicationRelInfo *oldrel;
+			bool		found = false;
+
+			foreach(newlc, rels)
+			{
+				PublicationRelInfo *newpubrel;
+
+				newpubrel = (PublicationRelInfo *) lfirst(newlc);
+
+				/*
+				 * Check if any of the new set of relations matches with the
+				 * existing relations in the publication.
+				 */
+				if (RelationGetRelid(newpubrel->relation) == oldrelid)
+				{
+					found = true;
+					break;
+				}
+			}
+
+			/*
+			 * Add the non-matched relations to a list so that they can be
+			 * dropped.
+			 */
+			if (!found)
+			{
+				oldrel = palloc(sizeof(PublicationRelInfo));
+				oldrel->whereClause = NULL;
+				oldrel->columns = NIL;
+				oldrel->relation = table_open(oldrelid,
+											  ShareUpdateExclusiveLock);
+				delrels = lappend(delrels, oldrel);
+			}
+		}
+
+		/* And drop them. */
+		PublicationDropRelations(pubid, delrels, true);
+
+		/*
+		 * Don't bother calculating the difference for adding, we'll catch and
+		 * skip existing ones when doing catalog update.
+		 */
+		PublicationAddRelations(pubid, rels, true, stmt);
+
+		CloseRelationList(delrels);
+	}
+
+	CloseRelationList(rels);
 }
 
 /*
@@ -1396,14 +1631,20 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
 		AlterPublicationOptions(pstate, stmt, rel, tup);
 	else
 	{
-		List	   *relations = NIL;
-		List	   *schemaidlist = NIL;
+		List	   *tables = NIL;
+		List	   *sequences = NIL;
+		List	   *tables_schemaidlist = NIL;
+		List	   *sequences_schemaidlist = NIL;
 		Oid			pubid = pubform->oid;
 
-		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
-								   &schemaidlist);
+		ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+								   &tables, &sequences,
+								   &tables_schemaidlist,
+								   &sequences_schemaidlist);
 
-		CheckAlterPublication(stmt, tup, relations, schemaidlist);
+		CheckAlterPublication(stmt, tup,
+							  tables, tables_schemaidlist,
+							  sequences, sequences_schemaidlist);
 
 		heap_freetuple(tup);
 
@@ -1424,9 +1665,16 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
 					errmsg("publication \"%s\" does not exist",
 						   stmt->pubname));
 
-		AlterPublicationTables(stmt, tup, relations, pstate->p_sourcetext,
-							   schemaidlist != NIL);
-		AlterPublicationSchemas(stmt, tup, schemaidlist);
+		AlterPublicationTables(stmt, tup, tables, pstate->p_sourcetext,
+							   tables_schemaidlist != NIL);
+
+		AlterPublicationSchemas(stmt, tup, tables_schemaidlist,
+								PUB_OBJTYPE_TABLE);
+
+		AlterPublicationSequences(stmt, tup, sequences);
+
+		AlterPublicationSchemas(stmt, tup, sequences_schemaidlist,
+								PUB_OBJTYPE_SEQUENCE);
 	}
 
 	/* Cleanup. */
@@ -1494,7 +1742,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1530,6 +1778,7 @@ RemovePublicationSchemaById(Oid psoid)
 	 * partitions.
 	 */
 	schemaRels = GetSchemaPublicationRelations(pubsch->pnnspid,
+											   pubsch->pntype,
 											   PUBLICATION_PART_ALL);
 	InvalidatePublicationRels(schemaRels);
 
@@ -1546,10 +1795,10 @@ RemovePublicationSchemaById(Oid psoid)
  * add them to a publication.
  */
 static List *
-OpenTableList(List *tables)
+OpenRelationList(List *rels, char objectType)
 {
 	List	   *relids = NIL;
-	List	   *rels = NIL;
+	List	   *result = NIL;
 	ListCell   *lc;
 	List	   *relids_with_rf = NIL;
 	List	   *relids_with_collist = NIL;
@@ -1557,19 +1806,35 @@ OpenTableList(List *tables)
 	/*
 	 * Open, share-lock, and check all the explicitly-specified relations
 	 */
-	foreach(lc, tables)
+	foreach(lc, rels)
 	{
 		PublicationTable *t = lfirst_node(PublicationTable, lc);
 		bool		recurse = t->relation->inh;
 		Relation	rel;
 		Oid			myrelid;
 		PublicationRelInfo *pub_rel;
+		char		myrelkind;
 
 		/* Allow query cancel in case this takes a long time */
 		CHECK_FOR_INTERRUPTS();
 
 		rel = table_openrv(t->relation, ShareUpdateExclusiveLock);
 		myrelid = RelationGetRelid(rel);
+		myrelkind = get_rel_relkind(myrelid);
+
+		/*
+		 * Make sure the relkind matches the expected object type. A mismatch
+		 * may happen e.g. when adding a sequence using ADD TABLE or a table
+		 * using ADD SEQUENCE.
+		 *
+		 * XXX We let through unsupported object types (views etc.). Those
+		 * will be caught later in check_publication_add_relation.
+		 */
+		if (pub_get_object_type_for_relkind(myrelkind) != PUB_OBJTYPE_UNSUPPORTED &&
+			pub_get_object_type_for_relkind(myrelkind) != objectType)
+			ereport(ERROR,
+					errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+					errmsg("object type does not match type expected by command"));
 
 		/*
 		 * Filter out duplicates if user specifies "foo, foo".
@@ -1602,7 +1867,7 @@ OpenTableList(List *tables)
 		pub_rel->relation = rel;
 		pub_rel->whereClause = t->whereClause;
 		pub_rel->columns = t->columns;
-		rels = lappend(rels, pub_rel);
+		result = lappend(result, pub_rel);
 		relids = lappend_oid(relids, myrelid);
 
 		if (t->whereClause)
@@ -1671,10 +1936,9 @@ OpenTableList(List *tables)
 				pub_rel->relation = rel;
 				/* child inherits WHERE clause from parent */
 				pub_rel->whereClause = t->whereClause;
-
 				/* child inherits column list from parent */
 				pub_rel->columns = t->columns;
-				rels = lappend(rels, pub_rel);
+				result = lappend(result, pub_rel);
 				relids = lappend_oid(relids, childrelid);
 
 				if (t->whereClause)
@@ -1689,14 +1953,14 @@ OpenTableList(List *tables)
 	list_free(relids);
 	list_free(relids_with_rf);
 
-	return rels;
+	return result;
 }
 
 /*
  * Close all relations in the list.
  */
 static void
-CloseTableList(List *rels)
+CloseRelationList(List *rels)
 {
 	ListCell   *lc;
 
@@ -1744,12 +2008,12 @@ LockSchemaList(List *schemalist)
  * Add listed tables to the publication.
  */
 static void
-PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
-					 AlterPublicationStmt *stmt)
+PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
+						AlterPublicationStmt *stmt)
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, rels)
 	{
@@ -1778,7 +2042,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
  * Remove listed tables from the publication.
  */
 static void
-PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
+PublicationDropRelations(Oid pubid, List *rels, bool missing_ok)
 {
 	ObjectAddress obj;
 	ListCell   *lc;
@@ -1823,19 +2087,19 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
  * Add listed schemas to the publication.
  */
 static void
-PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
-					  AlterPublicationStmt *stmt)
+PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+					  bool if_not_exists, AlterPublicationStmt *stmt)
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, schemas)
 	{
 		Oid			schemaid = lfirst_oid(lc);
 		ObjectAddress obj;
 
-		obj = publication_add_schema(pubid, schemaid, if_not_exists);
+		obj = publication_add_schema(pubid, schemaid, objectType, if_not_exists);
 		if (stmt)
 		{
 			EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
@@ -1851,7 +2115,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
  * Remove listed schemas from the publication.
  */
 static void
-PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
+PublicationDropSchemas(Oid pubid, List *schemas, char objectType, bool missing_ok)
 {
 	ObjectAddress obj;
 	ListCell   *lc;
@@ -1861,10 +2125,11 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
 	{
 		Oid			schemaid = lfirst_oid(lc);
 
-		psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+		psid = GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
 							   Anum_pg_publication_namespace_oid,
 							   ObjectIdGetDatum(schemaid),
-							   ObjectIdGetDatum(pubid));
+							   ObjectIdGetDatum(pubid),
+							   CharGetDatum(objectType));
 		if (!OidIsValid(psid))
 		{
 			if (missing_ok)
@@ -1919,6 +2184,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
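
For reference, a rough sketch of the publication DDL the code above is meant
to handle (the publication, sequence and schema names are made up):

    CREATE PUBLICATION all_seq_pub FOR ALL SEQUENCES;
    CREATE PUBLICATION seq_pub FOR SEQUENCE app.s1, SEQUENCES IN SCHEMA app;
    ALTER PUBLICATION seq_pub ADD SEQUENCE app.s2;
    ALTER PUBLICATION seq_pub DROP SEQUENCE app.s1;

As with the table variants, the FOR ALL SEQUENCES and SEQUENCES IN SCHEMA
forms are restricted to superusers.
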
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 28f8522264..b43b85bee6 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,75 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ */
+void
+SetSequence(Oid seq_relid, int64 value)
+{
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+	HeapTuple	tuple;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* Copy the existing sequence tuple. */
+	tuple = heap_copytuple(&seqdatatuple);
+
+	/* Now we're done with the old page */
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Modify the copied tuple to update the sequence state (similar to what
+	 * ResetSequence does).
+	 */
+	seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+	seq->last_value = value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	/* make sure the relfilenode creation is associated with the XID */
+	if (XLogLogicalInfoActive())
+		GetCurrentTransactionId();
+
+	/*
+	 * Create a new storage file for the sequence - this is needed for the
+	 * transactional behavior.
+	 */
+	RelationSetNewRelfilenumber(seqrel, seqrel->rd_rel->relpersistence);
+
+	/*
+	 * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+	 * unfrozen XIDs.  Same with relminmxid, since a sequence will never
+	 * contain multixacts.
+	 */
+	Assert(seqrel->rd_rel->relfrozenxid == InvalidTransactionId);
+	Assert(seqrel->rd_rel->relminmxid == InvalidMultiXactId);
+
+	/*
+	 * Insert the modified tuple into the new storage file. This does all the
+	 * necessary WAL-logging etc.
+	 */
+	fill_seq_with_data(seqrel, tuple);
+
+	/* Clear local cache so that we don't think we have cached numbers */
+	/* Note that we do not change the currval() state */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +547,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -551,7 +622,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -680,7 +751,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -976,7 +1047,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1180,7 +1251,8 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn)
 {
 	Page		page;
 	ItemId		lp;
@@ -1197,6 +1269,13 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/*
+	 * If the caller requested it, set the page LSN. This allows deciding
+	 * which sequence changes are before/after the returned sequence state.
+	 */
+	if (lsn)
+		*lsn = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1804,7 +1883,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1819,6 +1898,67 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+	XLogRecPtr	lsn;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4];
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	is_called = seq->is_called;
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	values[0] = LSNGetDatum(lsn);
+	values[1] = Int64GetDatum(last_value);
+	values[2] = Int64GetDatum(log_cnt);
+	values[3] = BoolGetDatum(is_called);
+
+	memset(nulls, 0, sizeof(nulls));
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
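
To illustrate the new pg_sequence_state() function: it returns the current
on-disk state of a sequence plus the page LSN of the sequence relation, which
the sync code further down uses to order the copied state against decoded
changes. A hypothetical session (the values shown are illustrative, and I am
assuming the SQL-level signature takes a regclass and names the last column
is_called):

    CREATE SEQUENCE s;
    SELECT nextval('s');
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('s'::regclass);

     page_lsn  | last_value | log_cnt | is_called
    -----------+------------+---------+-----------
     0/1534B88 |          1 |      32 | t

The last_value, log_cnt and page_lsn columns are the ones
fetch_sequence_data() relies on during the initial sequence sync.
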
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..330470780b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/pg_database_d.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
+#include "catalog/pg_subscription_seq.h"
 #include "catalog/pg_type.h"
 #include "commands/dbcommands.h"
 #include "commands/defrem.h"
@@ -72,6 +73,7 @@
 #define SUBOPT_FAILOVER				0x00002000
 #define SUBOPT_LSN					0x00004000
 #define SUBOPT_ORIGIN				0x00008000
+#define SUBOPT_SEQUENCES			0x00010000
 
 /* check if the 'val' has 'bits' set */
 #define IsSet(val, bits)  (((val) & (bits)) == (bits))
@@ -99,6 +101,7 @@ typedef struct SubOpts
 	bool		failover;
 	char	   *origin;
 	XLogRecPtr	lsn;
+	bool		sequences;
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
@@ -161,6 +164,8 @@ parse_subscription_options(ParseState *pstate, List *stmt_options,
 		opts->failover = false;
 	if (IsSet(supported_opts, SUBOPT_ORIGIN))
 		opts->origin = pstrdup(LOGICALREP_ORIGIN_ANY);
+	if (IsSet(supported_opts, SUBOPT_SEQUENCES))
+		opts->sequences = false;
 
 	/* Parse options */
 	foreach(lc, stmt_options)
@@ -366,6 +371,15 @@ parse_subscription_options(ParseState *pstate, List *stmt_options,
 			opts->specified_opts |= SUBOPT_LSN;
 			opts->lsn = lsn;
 		}
+		else if (IsSet(supported_opts, SUBOPT_SEQUENCES) &&
+				 strcmp(defel->defname, "sequences") == 0)
+		{
+			if (IsSet(opts->specified_opts, SUBOPT_SEQUENCES))
+				errorConflictingDefElem(defel, pstate);
+
+			opts->specified_opts |= SUBOPT_SEQUENCES;
+			opts->sequences = defGetBoolean(defel);
+		}
 		else
 			ereport(ERROR,
 					(errcode(ERRCODE_SYNTAX_ERROR),
@@ -603,7 +617,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 					  SUBOPT_SYNCHRONOUS_COMMIT | SUBOPT_BINARY |
 					  SUBOPT_STREAMING | SUBOPT_TWOPHASE_COMMIT |
 					  SUBOPT_DISABLE_ON_ERR | SUBOPT_PASSWORD_REQUIRED |
-					  SUBOPT_RUN_AS_OWNER | SUBOPT_FAILOVER | SUBOPT_ORIGIN);
+					  SUBOPT_RUN_AS_OWNER | SUBOPT_FAILOVER | SUBOPT_ORIGIN |
+					  SUBOPT_SEQUENCES);
 	parse_subscription_options(pstate, stmt->options, supported_opts, &opts);
 
 	/*
@@ -710,6 +725,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	values[Anum_pg_subscription_subpasswordrequired - 1] = BoolGetDatum(opts.passwordrequired);
 	values[Anum_pg_subscription_subrunasowner - 1] = BoolGetDatum(opts.runasowner);
 	values[Anum_pg_subscription_subfailover - 1] = BoolGetDatum(opts.failover);
+	values[Anum_pg_subscription_subsequences - 1] = BoolGetDatum(opts.sequences);
 	values[Anum_pg_subscription_subconninfo - 1] =
 		CStringGetTextDatum(conninfo);
 	if (opts.slot_name)
@@ -763,6 +779,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
 
+			/*
+			 * Add sequences, but only if the subscription explicitly enabled
+			 * them to be replicated.
+			 */
+			if (opts.sequences)
+			{
+				List *sequences = fetch_sequence_list(wrconn, publications);
+				copy_subscription_sequences(wrconn, subid, sequences);
+			}
+
 			/*
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
@@ -1077,6 +1103,131 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequence data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	ListCell   *lc;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	if (!sub->sequences)
+		return;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid);
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build qsorted array of local sequence oids for faster lookup. This
+		 * can potentially contain all sequences in the database, so speed of
+		 * lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach(lc, subseq_states)
+		{
+			SubscriptionSeqInfo *seqinfo = (SubscriptionSeqInfo *) lfirst(lc);
+
+			subseq_local_oids[off++] = seqinfo->seqid;
+		}
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and resolve them to local OIDs,
+		 * verifying that each one has a supported relkind.
+		 *
+		 * This also builds the array of local oids of the remote sequences
+		 * for the next step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach(lc, pubseq_names)
+		{
+			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+		}
+
+		/*
+		 * Next, remove state for sequences we no longer care about, using the
+		 * data we collected above.
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionSeqRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+		}
+
+		copy_subscription_sequences(wrconn, sub->oid, pubseq_names);
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1263,6 +1414,13 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					replaces[Anum_pg_subscription_suborigin - 1] = true;
 				}
 
+				if (IsSet(opts.specified_opts, SUBOPT_SEQUENCES))
+				{
+					values[Anum_pg_subscription_subsequences - 1] =
+						BoolGetDatum(opts.sequences);
+					replaces[Anum_pg_subscription_subsequences - 1] = true;
+				}
+
 				update_tuple = true;
 				break;
 			}
@@ -1404,6 +1562,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
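
On the subscriber side, the new option and command would be used roughly like
this (subscription name and connection string are placeholders):

    CREATE SUBSCRIPTION sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub
        WITH (sequences = true);

    -- e.g. shortly before a planned switchover, re-copy the current
    -- sequence values from the publisher
    ALTER SUBSCRIPTION sub REFRESH SEQUENCES;

With sequences = false (the default here), nothing is copied at CREATE
SUBSCRIPTION time and no pg_subscription_seq rows are created. REFRESH
SEQUENCES requires an enabled subscription and cannot run inside a
transaction block.
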
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4d582950b7..8eea735441 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -455,7 +455,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <node>	pub_obj_type
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10557,12 +10558,16 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION FOR ALL TABLES [WITH options]
  *
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
  * pub_obj is one of:
  *
  *		TABLE table [, ...]
+ *		SEQUENCE sequence [, ...]
  *		TABLES IN SCHEMA schema [, ...]
+ *		SEQUENCES IN SCHEMA schema [, ...]
  *
  *****************************************************************************/
 
@@ -10575,13 +10580,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					n->for_all_objects = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10632,6 +10637,26 @@ PublicationObjSpec:
 					$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
 					$$->location = @4;
 				}
+			| SEQUENCE relation_expr
+				{
+					$$ = makeNode(PublicationObjSpec);
+					$$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+					$$->pubtable = makeNode(PublicationTable);
+					$$->pubtable->relation = $2;
+				}
+			| SEQUENCES IN_P SCHEMA ColId
+				{
+					$$ = makeNode(PublicationObjSpec);
+					$$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+					$$->name = $4;
+					$$->location = @4;
+				}
+			| SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+				{
+					$$ = makeNode(PublicationObjSpec);
+					$$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+					$$->location = @4;
+				}
 			| ColId opt_column_list OptWhereClause
 				{
 					$$ = makeNode(PublicationObjSpec);
@@ -10693,6 +10718,19 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+pub_obj_type:	TABLES
+					{ $$ = (Node *) makeString("tables"); }
+				| SEQUENCES
+					{ $$ = (Node *) makeString("sequences"); }
+	;
+
+pub_obj_type_list:	pub_obj_type
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' pub_obj_type
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -10706,7 +10744,9 @@ pub_obj_list:	PublicationObjSpec
  * pub_obj is one of:
  *
  *		TABLE table_name [, ...]
+ *		SEQUENCE sequence_name [, ...]
  *		TABLES IN SCHEMA schema_name [, ...]
+ *		SEQUENCES IN SCHEMA schema_name [, ...]
  *
  *****************************************************************************/
 
@@ -10807,6 +10847,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
@@ -19435,7 +19484,8 @@ preprocess_pubobj_list(List *pubobjspec_list, core_yyscan_t yyscanner)
 		if (pubobj->pubobjtype == PUBLICATIONOBJ_CONTINUATION)
 			pubobj->pubobjtype = prevobjtype;
 
-		if (pubobj->pubobjtype == PUBLICATIONOBJ_TABLE)
+		if (pubobj->pubobjtype == PUBLICATIONOBJ_TABLE ||
+			pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCE)
 		{
 			/* relation name or pubtable must be set for this type of object */
 			if (!pubobj->name && !pubobj->pubtable)
@@ -19486,6 +19536,30 @@ preprocess_pubobj_list(List *pubobjspec_list, core_yyscan_t yyscanner)
 						errmsg("invalid schema name"),
 						parser_errposition(pubobj->location));
 		}
+		else if (pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA ||
+				 pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA)
+		{
+			/* WHERE clause is not allowed on a schema object */
+			if (pubobj->pubtable && pubobj->pubtable->whereClause)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("WHERE clause not allowed for schema"),
+						parser_errposition(pubobj->location));
+
+			/*
+			 * We can distinguish between the different types of schema
+			 * objects based on whether name and pubtable are set.
+			 */
+			if (pubobj->name)
+				pubobj->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+			else if (!pubobj->name && !pubobj->pubtable)
+				pubobj->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+			else
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid schema name at or near"),
+						parser_errposition(pubobj->location));
+		}
 
 		prevobjtype = pubobj->pubobjtype;
 	}
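
The pub_obj_type_list rule is what allows combining object classes after FOR
ALL, so as far as I can tell the grammar now accepts all of:

    CREATE PUBLICATION pub1 FOR ALL TABLES;
    CREATE PUBLICATION pub2 FOR ALL SEQUENCES;
    CREATE PUBLICATION pub3 FOR ALL TABLES, SEQUENCES;

with the "tables"/"sequences" strings translated into the puballtables and
puballsequences flags in CreatePublication().
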
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..639f685c22
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,357 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2012-2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/indexing.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_subscription_seq.h"
+#include "commands/sequence.h"
+#include "executor/executor.h"
+#include "nodes/makefuncs.h"
+#include "replication/worker_internal.h"
+#include "storage/lmgr.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+
+/*
+ * Add sequence for a subscription.
+ */
+static void
+AddSubscriptionSequence(Oid subid, Oid relid, XLogRecPtr sublsn)
+{
+	Relation	rel;
+	HeapTuple	tup;
+	bool		nulls[Natts_pg_subscription_seq];
+	Datum		values[Natts_pg_subscription_seq];
+	bool		replaces[Natts_pg_subscription_seq];
+
+	/* Form the tuple. */
+	memset(values, 0, sizeof(values));
+	memset(nulls, false, sizeof(nulls));
+	values[Anum_pg_subscription_seq_sssubid - 1] = ObjectIdGetDatum(subid);
+	values[Anum_pg_subscription_seq_ssseqid - 1] = ObjectIdGetDatum(relid);
+	if (sublsn != InvalidXLogRecPtr)
+		values[Anum_pg_subscription_seq_sssublsn - 1] = LSNGetDatum(sublsn);
+	else
+		nulls[Anum_pg_subscription_seq_sssublsn - 1] = true;
+
+	LockSharedObject(SubscriptionRelationId, subid, 0, AccessShareLock);
+
+	rel = table_open(SubscriptionSeqRelationId, RowExclusiveLock);
+
+	/* Try finding existing mapping. */
+	tup = SearchSysCacheCopy2(SUBSCRIPTIONSEQMAP,
+							  ObjectIdGetDatum(relid),
+							  ObjectIdGetDatum(subid));
+	if (!HeapTupleIsValid(tup))
+	{
+		tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
+
+		/* Insert tuple into catalog. */
+		CatalogTupleInsert(rel, tup);
+
+		heap_freetuple(tup);
+	}
+	else
+	{
+		memset(replaces, true, sizeof(replaces));
+
+		tup = heap_modify_tuple(tup, RelationGetDescr(rel), values, nulls,
+								replaces);
+
+		/* Update the catalog. */
+		CatalogTupleUpdate(rel, &tup->t_self, tup);
+	}
+
+	/* Cleanup. */
+	table_close(rel, NoLock);
+}
+
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	ListCell   *lc;
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach(lc, publications)
+	{
+		char	   *pubname = strVal(lfirst(lc));
+
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated tables from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetLSN(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch table info for table \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("table \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * Logical replication of sequences is based on decoding WAL records that
+	 * describe the "next" state of the sequence, i.e. the state the current
+	 * relfilenode contents have not reached yet. During the initial sync we
+	 * read the current state instead, so we reconstruct the value that was
+	 * WAL-logged at the start of the current batch (last_value + log_cnt).
+	 *
+	 * Otherwise we might get duplicate values (on the subscriber) if we
+	 * failed over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* apply the reconstructed state to the local sequence */
+	SetSequence(RelationGetRelid(rel), sequence_value);
+
+	/* return the remote LSN of the sequence state we copied */
+	return lsn;
+}
+
+/*
+ * Copy subscription's sequence data from the publisher.
+ */
+void
+copy_subscription_sequences(WalReceiverConn *conn, Oid subid, List *sequences)
+{
+	WalRcvExecResult *res;
+	char		slotname[NAMEDATALEN] = {0};
+	XLogRecPtr	origin_startpos = InvalidXLogRecPtr;
+	ListCell   *lc;
+
+	/*
+	 * Start a transaction in the remote node in REPEATABLE READ mode.  This
+	 * ensures that the replication slot we create (see below) and the
+	 * sequence states we read are consistent with each other.
+	 */
+	res = walrcv_exec(conn,
+						"BEGIN READ ONLY ISOLATION LEVEL REPEATABLE READ",
+						0, NULL);
+	if (res->status != WALRCV_OK_COMMAND)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence copy could not start transaction on publisher: %s",
+						res->err)));
+	walrcv_clear_result(res);
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/* Create a new temporary logical decoding slot */
+	walrcv_create_slot(conn, slotname, true /* temporary */ ,
+						false /* two_phase */ , false, /* failover */
+						CRS_USE_SNAPSHOT, &origin_startpos);
+
+	foreach(lc, sequences)
+	{
+		RangeVar   *rv = (RangeVar *) lfirst(lc);
+		Oid			relid;
+		XLogRecPtr	sequence_lsn = InvalidXLogRecPtr;
+		Relation	sequencerel;
+
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Check for supported relkind. */
+		CheckSubscriptionRelkind(get_rel_relkind(relid),
+								 rv->schemaname, rv->relname);
+
+		sequencerel = table_open(relid, RowExclusiveLock);
+
+		/*
+		 * The sequence copy does not honor RLS policies.  That is not a
+		 * problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not
+		 * be able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel))));
+
+		sequence_lsn = copy_sequence(conn, sequencerel);
+
+		AddSubscriptionSequence(subid, relid, sequence_lsn);
+
+		ereport(LOG,
+				errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+					   get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));
+		table_close(sequencerel, NoLock);
+	}
+
+	res = walrcv_exec(conn, "COMMIT", 0, NULL);
+	if (res->status != WALRCV_OK_COMMAND)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence copy could not finish transaction on publisher: %s",
+						res->err));
+	walrcv_clear_result(res);
+}
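
To make the reconstruction in copy_sequence()/fetch_sequence_data() concrete:
the worker asks the publisher for last_value + log_cnt rather than the raw
last_value, because that is the value already covered by the WAL record for
the current batch of pre-logged values. A made-up example (assuming the usual
batch of 32):

    -- on the publisher
    SELECT last_value, log_cnt, last_value + log_cnt AS synced_value
      FROM pg_sequence_state('s'::regclass);

     last_value | log_cnt | synced_value
    ------------+---------+--------------
            105 |      27 |          132

The subscriber's sequence is then set to 132 via SetSequence(), so if we fail
over to the subscriber right after the sync it cannot hand out values the
publisher might already have used from its pre-logged batch. The set of
sequences to copy comes from the pg_publication_sequences view queried in
fetch_sequence_list().
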
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index d2b35cfb96..8d737ba9c9 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -15,6 +15,7 @@
 #include "access/tupconvert.h"
 #include "catalog/partition.h"
 #include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
 #include "catalog/pg_publication_rel.h"
 #include "catalog/pg_subscription.h"
 #include "commands/defrem.h"
@@ -906,9 +907,10 @@ pgoutput_row_filter_init(PGOutputData *data, List *publications,
 		 * (even if other publications have a row filter).
 		 */
 		if (!pub->alltables &&
-			!SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+			!SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
 								   ObjectIdGetDatum(schemaid),
-								   ObjectIdGetDatum(pub->oid)))
+								   ObjectIdGetDatum(pub->oid),
+								   PUB_OBJTYPE_TABLE))
 		{
 			/*
 			 * Check for the presence of a row filter in this publication.
@@ -1997,18 +1999,19 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
 	{
 		Oid			schemaId = get_rel_namespace(relid);
 		List	   *pubids = GetRelationPublications(relid);
+		char		relkind = get_rel_relkind(relid);
+		char		objectType = pub_get_object_type_for_relkind(relkind);
 
 		/*
 		 * We don't acquire a lock on the namespace system table as we build
 		 * the cache entry using a historic snapshot and all the later changes
 		 * are absorbed while decoding WAL.
 		 */
-		List	   *schemaPubids = GetSchemaPublications(schemaId);
+		List	   *schemaPubids = GetSchemaPublications(schemaId, objectType);
 		ListCell   *lc;
 		Oid			publish_as_relid = relid;
 		int			publish_ancestor_level = 0;
 		bool		am_partition = get_rel_relispartition(relid);
-		char		relkind = get_rel_relkind(relid);
 		List	   *rel_publications = NIL;
 
 		/* Reload publications if needed before use. */
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index cc9b0c6524..1ddd7644e5 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -55,6 +55,7 @@
 #include "catalog/pg_opclass.h"
 #include "catalog/pg_proc.h"
 #include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
 #include "catalog/pg_rewrite.h"
 #include "catalog/pg_shseclabel.h"
 #include "catalog/pg_statistic_ext.h"
@@ -5687,6 +5688,8 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
 	Oid			schemaid;
 	List	   *ancestors = NIL;
 	Oid			relid = RelationGetRelid(relation);
+	char		relkind = relation->rd_rel->relkind;
+	char		objType;
 
 	/*
 	 * If not publishable, it publishes no actions.  (pgoutput_change() will
@@ -5717,8 +5720,15 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
 	/* Fetch the publication membership info. */
 	puboids = GetRelationPublications(relid);
 	schemaid = RelationGetNamespace(relation);
-	puboids = list_concat_unique_oid(puboids, GetSchemaPublications(schemaid));
+	objType = pub_get_object_type_for_relkind(relkind);
 
+	puboids = list_concat_unique_oid(puboids,
+									 GetSchemaPublications(schemaid, objType));
+
+	/*
+	 * If this is a partition (and thus a table), look up all ancestors and
+	 * track all their publications too.
+	 */
 	if (relation->rd_rel->relispartition)
 	{
 		/* Add publications that the ancestors are in too. */
@@ -5730,12 +5740,23 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
 
 			puboids = list_concat_unique_oid(puboids,
 											 GetRelationPublications(ancestor));
+
+			/* include all publications publishing schema of all ancestors */
 			schemaid = get_rel_namespace(ancestor);
 			puboids = list_concat_unique_oid(puboids,
-											 GetSchemaPublications(schemaid));
+											 GetSchemaPublications(schemaid,
+																   PUB_OBJTYPE_TABLE));
 		}
 	}
-	puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
+
+	/*
+	 * Consider also FOR ALL TABLES and FOR ALL SEQUENCES publications,
+	 * depending on the relkind of the relation.
+	 */
+	if (relation->rd_rel->relkind == RELKIND_SEQUENCE)
+		puboids = list_concat_unique_oid(puboids, GetAllSequencesPublications());
+	else
+		puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
 
 	foreach(lc, puboids)
 	{
@@ -5754,6 +5775,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
 		pubdesc->pubactions.pubupdate |= pubform->pubupdate;
 		pubdesc->pubactions.pubdelete |= pubform->pubdelete;
 		pubdesc->pubactions.pubtruncate |= pubform->pubtruncate;
+		pubdesc->pubactions.pubsequence |= pubform->pubsequence;
 
 		/*
 		 * Check if all columns referenced in the filter expression are part
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e324070828..8f0e375cf8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4185,10 +4185,12 @@ getPublications(Archive *fout, int *numPublications)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
 	int			i_pubtruncate;
+	int			i_pubsequence;
 	int			i_pubviaroot;
 	int			i,
 				ntups;
@@ -4204,23 +4206,29 @@ getPublications(Archive *fout, int *numPublications)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubsequence, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubsequence, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4232,10 +4240,12 @@ getPublications(Archive *fout, int *numPublications)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
 	i_pubtruncate = PQfnumber(res, "pubtruncate");
+	i_pubsequence = PQfnumber(res, "pubsequence");
 	i_pubviaroot = PQfnumber(res, "pubviaroot");
 
 	pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
@@ -4251,6 +4261,8 @@ getPublications(Archive *fout, int *numPublications)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4259,6 +4271,8 @@ getPublications(Archive *fout, int *numPublications)
 			(strcmp(PQgetvalue(res, i, i_pubdelete), "t") == 0);
 		pubinfo[i].pubtruncate =
 			(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
+		pubinfo[i].pubsequence =
+			(strcmp(PQgetvalue(res, i, i_pubsequence), "t") == 0);
 		pubinfo[i].pubviaroot =
 			(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
 
@@ -4304,6 +4318,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
 
+	if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
 	{
@@ -4338,6 +4355,15 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 		first = false;
 	}
 
+	if (pubinfo->pubsequence)
+	{
+		if (!first)
+			appendPQExpBufferStr(query, ", ");
+
+		appendPQExpBufferStr(query, "sequence");
+		first = false;
+	}
+
 	appendPQExpBufferChar(query, '\'');
 
 	if (pubinfo->pubviaroot)
@@ -4384,6 +4410,7 @@ getPublicationNamespaces(Archive *fout)
 	int			i_oid;
 	int			i_pnpubid;
 	int			i_pnnspid;
+	int			i_pntype;
 	int			i,
 				j,
 				ntups;
@@ -4395,7 +4422,7 @@ getPublicationNamespaces(Archive *fout)
 
 	/* Collect all publication membership info. */
 	appendPQExpBufferStr(query,
-						 "SELECT tableoid, oid, pnpubid, pnnspid "
+						 "SELECT tableoid, oid, pnpubid, pnnspid, pntype "
 						 "FROM pg_catalog.pg_publication_namespace");
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 
@@ -4405,6 +4432,7 @@ getPublicationNamespaces(Archive *fout)
 	i_oid = PQfnumber(res, "oid");
 	i_pnpubid = PQfnumber(res, "pnpubid");
 	i_pnnspid = PQfnumber(res, "pnnspid");
+	i_pntype = PQfnumber(res, "pntype");
 
 	/* this allocation may be more than we need */
 	pubsinfo = pg_malloc(ntups * sizeof(PublicationSchemaInfo));
@@ -4414,6 +4442,7 @@ getPublicationNamespaces(Archive *fout)
 	{
 		Oid			pnpubid = atooid(PQgetvalue(res, i, i_pnpubid));
 		Oid			pnnspid = atooid(PQgetvalue(res, i, i_pnnspid));
+		char		pntype = PQgetvalue(res, i, i_pntype)[0];
 		PublicationInfo *pubinfo;
 		NamespaceInfo *nspinfo;
 
@@ -4445,6 +4474,7 @@ getPublicationNamespaces(Archive *fout)
 		pubsinfo[j].dobj.name = nspinfo->dobj.name;
 		pubsinfo[j].publication = pubinfo;
 		pubsinfo[j].pubschema = nspinfo;
+		pubsinfo[j].pubtype = pntype;
 
 		/* Decide whether we want to dump it */
 		selectDumpablePublicationObject(&(pubsinfo[j].dobj), fout);
@@ -4610,7 +4640,11 @@ dumpPublicationNamespace(Archive *fout, const PublicationSchemaInfo *pubsinfo)
 	query = createPQExpBuffer();
 
 	appendPQExpBuffer(query, "ALTER PUBLICATION %s ", fmtId(pubinfo->dobj.name));
-	appendPQExpBuffer(query, "ADD TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+
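+	/* pntype 't' publishes the tables in the schema, 's' the sequences */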
+	if (pubsinfo->pubtype == 't')
+		appendPQExpBuffer(query, "ADD TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+	else
+		appendPQExpBuffer(query, "ADD SEQUENCES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
 
 	/*
 	 * There is no point in creating drop query as the drop is done by schema
@@ -4643,6 +4677,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
 	TableInfo  *tbinfo = pubrinfo->pubtable;
 	PQExpBuffer query;
 	char	   *tag;
+	char	   *description;
 
 	/* Do nothing in data-only dump */
 	if (dopt->dataOnly)
@@ -4652,8 +4687,19 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
 
 	query = createPQExpBuffer();
 
-	appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
-					  fmtId(pubinfo->dobj.name));
+	if (tbinfo->relkind == RELKIND_SEQUENCE)
+	{
+		appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD SEQUENCE",
+						  fmtId(pubinfo->dobj.name));
+		description = "PUBLICATION SEQUENCE";
+	}
+	else
+	{
+		appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
+						  fmtId(pubinfo->dobj.name));
+		description = "PUBLICATION TABLE";
+	}
+
 	appendPQExpBuffer(query, " %s",
 					  fmtQualifiedDumpable(tbinfo));
 
@@ -4683,7 +4729,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
 					 ARCHIVE_OPTS(.tag = tag,
 								  .namespace = tbinfo->dobj.namespace->dobj.name,
 								  .owner = pubinfo->rolname,
-								  .description = "PUBLICATION TABLE",
+								  .description = description,
 								  .section = SECTION_POST_DATA,
 								  .createStmt = query->data));
 
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 865823868f..508fd51bf4 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,10 +619,12 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
 	bool		pubtruncate;
+	bool		pubsequence;
 	bool		pubviaroot;
 } PublicationInfo;
 
@@ -648,6 +650,7 @@ typedef struct _PublicationSchemaInfo
 	DumpableObject dobj;
 	PublicationInfo *publication;
 	NamespaceInfo *pubschema;
+	char		pubtype;
 } PublicationSchemaInfo;
 
 /*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..b7989e5470 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2916,7 +2916,7 @@ my %tests = (
 		create_order => 50,
 		create_sql => 'CREATE PUBLICATION pub1;',
 		regexp => qr/^
-			\QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
+			\QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate, sequence');\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
 	},
@@ -2936,7 +2936,7 @@ my %tests = (
 		create_order => 50,
 		create_sql => 'CREATE PUBLICATION pub3;',
 		regexp => qr/^
-			\QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate');\E
+			\QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate, sequence');\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
 	},
@@ -2945,7 +2945,27 @@ my %tests = (
 		create_order => 50,
 		create_sql => 'CREATE PUBLICATION pub4;',
 		regexp => qr/^
-			\QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate');\E
+			\QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate, sequence');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 WITH (publish = 'insert, update, delete, truncate, sequence');\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
 	},
@@ -3091,6 +3111,27 @@ my %tests = (
 		unlike => { exclude_dump_test_schema => 1, },
 	  },
 
+	'ALTER PUBLICATION pub3 ADD SEQUENCES IN SCHEMA dump_test' => {
+		create_order => 51,
+		create_sql =>
+		  'ALTER PUBLICATION pub3 ADD SEQUENCES IN SCHEMA dump_test;',
+		regexp => qr/^
+			\QALTER PUBLICATION pub3 ADD SEQUENCES IN SCHEMA dump_test;\E
+			/xm,
+		like   => { %full_runs, section_post_data => 1, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
+
+	'ALTER PUBLICATION pub3 ADD SEQUENCES IN SCHEMA public' => {
+		create_order => 52,
+		create_sql =>
+		  'ALTER PUBLICATION pub3 ADD SEQUENCES IN SCHEMA public;',
+		regexp => qr/^
+			\QALTER PUBLICATION pub3 ADD SEQUENCES IN SCHEMA public;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SCHEMA public' => {
 		regexp => qr/^CREATE SCHEMA public;/m,
 
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f67bf0b892..61783ae0cb 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,63 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
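+			/*
+			 * A sequence can be published via a SEQUENCES IN SCHEMA entry
+			 * (pg_publication_namespace with pntype = 's'), via an explicit
+			 * pg_publication_rel entry, or via a FOR ALL SEQUENCES
+			 * publication, so collect publication names from all three.
+			 */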
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "		JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
+							  "		JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
+							  "WHERE pc.oid ='%s' and pn.pntype = 's' and pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "UNION\n"
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "		JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\n"
+							  "WHERE pr.prrelid = '%s'\n"
+							  "UNION\n"
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid, oid, oid, oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2121,11 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -2979,7 +3052,7 @@ describeOneTableDetails(const char *schemaname,
 								  "FROM pg_catalog.pg_publication p\n"
 								  "     JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
 								  "     JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
-								  "WHERE pc.oid ='%s' and pg_catalog.pg_relation_is_publishable('%s')\n"
+								  "WHERE pc.oid ='%s' and pn.pntype = 't' and pg_catalog.pg_relation_is_publishable('%s')\n"
 								  "UNION\n"
 								  "SELECT pubname\n"
 								  "     , pg_get_expr(pr.prqual, c.oid)\n"
@@ -5076,7 +5149,7 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
 		int			i;
 
 		printfPQExpBuffer(&buf,
-						  "SELECT pubname \n"
+						  "SELECT pubname, (CASE WHEN pntype = 't' THEN 'tables' ELSE 'sequences' END) AS pubtype\n"
 						  "FROM pg_catalog.pg_publication p\n"
 						  "     JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
 						  "     JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid \n"
@@ -5102,8 +5175,9 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
 			/* Might be an empty set - that's ok */
 			for (i = 0; i < pub_schema_tuples; i++)
 			{
-				printfPQExpBuffer(&buf, "    \"%s\"",
-								  PQgetvalue(result, i, 0));
+				printfPQExpBuffer(&buf, "    \"%s\" (%s)",
+								  PQgetvalue(result, i, 0),
+								  PQgetvalue(result, i, 1));
 
 				footers[i + 1] = pg_strdup(buf.data);
 			}
@@ -6219,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6233,23 +6307,45 @@ listPublications(const char *pattern)
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT pubname AS \"%s\",\n"
-					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
-					  gettext_noop("Name"),
-					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
-					  gettext_noop("Inserts"),
-					  gettext_noop("Updates"),
-					  gettext_noop("Deletes"));
+	if (pset.sversion >= 170000)
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  puballsequences AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("All sequences"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
 						  gettext_noop("Truncates"));
+	if (pset.sversion >= 170000)
+		appendPQExpBuffer(&buf,
+						  ",\n  pubsequence AS \"%s\"",
+						  gettext_noop("Sequences"));
 	if (pset.sversion >= 130000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubviaroot AS \"%s\"",
@@ -6343,6 +6439,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6359,6 +6456,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6372,6 +6470,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences, pubsequence");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6417,6 +6519,7 @@ describePublications(const char *pattern)
 		char	   *pubid = PQgetvalue(res, i, 0);
 		char	   *pubname = PQgetvalue(res, i, 1);
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+		bool		puballsequences = strcmp(PQgetvalue(res, i, 9), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
 		if (has_pubtruncate)
@@ -6424,29 +6527,43 @@ describePublications(const char *pattern)
 		if (has_pubviaroot)
 			ncols++;
 
+		/* sequences add two extra columns (puballsequences, pubsequence) */
+		if (has_pubsequence)
+			ncols += 2;
+
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
 		printTableInit(&cont, &myopt, title.data, ncols, nrows);
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
 		if (has_pubtruncate)
 			printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("Sequences"), true, align);
 		if (has_pubviaroot)
 			printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
 
-		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);	/* owner */
+		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);	/* all tables */
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
+		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);	/* insert */
+		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);	/* update */
+		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);	/* delete */
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);	/* truncate */
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false); /* sequence */
 		if (has_pubviaroot)
-			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);	/* via root */
 
 		if (!puballtables)
 		{
@@ -6477,6 +6594,7 @@ describePublications(const char *pattern)
 							  "WHERE c.relnamespace = n.oid\n"
 							  "  AND c.oid = pr.prrelid\n"
 							  "  AND pr.prpubid = '%s'\n"
+							  "  AND c.relkind != 'S'\n"	/* exclude sequences */
 							  "ORDER BY 1,2", pubid);
 			if (!addFooterToPublicationDesc(&buf, _("Tables:"), false, &cont))
 				goto error_return;
@@ -6488,7 +6606,7 @@ describePublications(const char *pattern)
 								  "SELECT n.nspname\n"
 								  "FROM pg_catalog.pg_namespace n\n"
 								  "     JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
-								  "WHERE pn.pnpubid = '%s'\n"
+								  "WHERE pn.pnpubid = '%s' AND pn.pntype = 't'\n"
 								  "ORDER BY 1", pubid);
 				if (!addFooterToPublicationDesc(&buf, _("Tables from schemas:"),
 												true, &cont))
@@ -6496,6 +6614,37 @@ describePublications(const char *pattern)
 			}
 		}
 
+		if (!puballsequences)
+		{
+			/* Get the sequences for the specified publication */
+			printfPQExpBuffer(&buf,
+							  "SELECT n.nspname, c.relname, NULL, NULL\n"
+							  "FROM pg_catalog.pg_class c,\n"
+							  "     pg_catalog.pg_namespace n,\n"
+							  "     pg_catalog.pg_publication_rel pr\n"
+							  "WHERE c.relnamespace = n.oid\n"
+							  "  AND c.oid = pr.prrelid\n"
+							  "  AND pr.prpubid = '%s'\n"
+							  "  AND c.relkind = 'S'\n" /* only sequences */
+							  "ORDER BY 1,2", pubid);
+			if (!addFooterToPublicationDesc(&buf, _("Sequences:"), false, &cont))
+				goto error_return;
+
+			if (pset.sversion >= 150000)
+			{
+				/* Get the schemas for the specified publication */
+				printfPQExpBuffer(&buf,
+								  "SELECT n.nspname\n"
+								  "FROM pg_catalog.pg_namespace n\n"
+								  "     JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
+								  "WHERE pn.pnpubid = '%s' AND pn.pntype = 's'\n"
+								  "ORDER BY 1", pubid);
+				if (!addFooterToPublicationDesc(&buf, _("Sequences from schemas:"),
+												true, &cont))
+					goto error_return;
+			}
+		}
+
 		printTable(&cont, pset.queryFout, false, pset.logfile);
 		printTableCleanup(&cont);
 
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..2ac6ff6c4d 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1893,11 +1893,15 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
 	/* ALTER PUBLICATION <name> ADD */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
-		COMPLETE_WITH("TABLES IN SCHEMA", "TABLE");
+		COMPLETE_WITH("TABLES IN SCHEMA", "TABLE", "SEQUENCES IN SCHEMA", "SEQUENCE");
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
 			 (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
 			  ends_with(prev_wd, ',')))
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") ||
+			 (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") &&
+			  ends_with(prev_wd, ',')))
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
 
 	/*
 	 * "ALTER PUBLICATION <name> SET TABLE <name> WHERE (" - complete with
@@ -1917,11 +1921,11 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH(",");
 	/* ALTER PUBLICATION <name> DROP */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
-		COMPLETE_WITH("TABLES IN SCHEMA", "TABLE");
+		COMPLETE_WITH("TABLES IN SCHEMA", "TABLE", "SEQUENCES IN SCHEMA", "SEQUENCE");
 	/* ALTER PUBLICATION <name> SET */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
-		COMPLETE_WITH("(", "TABLES IN SCHEMA", "TABLE");
-	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "TABLES", "IN", "SCHEMA"))
+		COMPLETE_WITH("(", "TABLES IN SCHEMA", "TABLE", "SEQUENCES IN SCHEMA", "SEQUENCE");
+	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "TABLES|SEQUENCES", "IN", "SCHEMA"))
 		COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
 								 " AND nspname NOT LIKE E'pg\\\\_%%'",
 								 "CURRENT_SCHEMA");
@@ -1933,6 +1937,10 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
 					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
 					  "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
+			 TailMatches("REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
@@ -3159,24 +3167,31 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "FOR SEQUENCE", "FOR ALL SEQUENCES", "FOR SEQUENCES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA", "SEQUENCE", "ALL SEQUENCES", "SEQUENCES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("IN SCHEMA");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE", MatchAny) && !ends_with(prev_wd, ','))
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE|SEQUENCE", MatchAny) && !ends_with(prev_wd, ','))
 		COMPLETE_WITH("WHERE (", "WITH (");
 	/* Complete "CREATE PUBLICATION <name> FOR TABLE" with "<table>, ..." */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE"))
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
 
 	/*
-	 * "CREATE PUBLICATION <name> FOR TABLE <name> WHERE (" - complete with
-	 * table attributes
+	 * Complete "CREATE PUBLICATION <name> FOR SEQUENCE" with "<sequence>,
+	 * ..."
+	 */
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "SEQUENCE"))
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
+
+	/*
+	 * "CREATE PUBLICATION <name> FOR TABLE|SEQUENCE <name> WHERE (" -
+	 * complete with table attributes
 	 */
 	else if (HeadMatches("CREATE", "PUBLICATION", MatchAny) && TailMatches("WHERE"))
 		COMPLETE_WITH("(");
@@ -3188,11 +3203,11 @@ psql_completion(const char *text, int start, int end)
 	/*
 	 * Complete "CREATE PUBLICATION <name> FOR TABLES IN SCHEMA <schema>, ..."
 	 */
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES", "IN", "SCHEMA"))
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES|SEQUENCES", "IN", "SCHEMA"))
 		COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
 								 " AND nspname NOT LIKE E'pg\\\\_%%'",
 								 "CURRENT_SCHEMA");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES", "IN", "SCHEMA", MatchAny) && (!ends_with(prev_wd, ',')))
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES|SEQUENCES", "IN", "SCHEMA", MatchAny) && (!ends_with(prev_wd, ',')))
 		COMPLETE_WITH("WITH (");
 	/* Complete "CREATE PUBLICATION <name> [...] WITH" */
 	else if (HeadMatches("CREATE", "PUBLICATION") && TailMatches("WITH", "("))
diff --git a/src/include/catalog/Makefile b/src/include/catalog/Makefile
index 167f91a6e3..5878810887 100644
--- a/src/include/catalog/Makefile
+++ b/src/include/catalog/Makefile
@@ -81,7 +81,8 @@ CATALOG_HEADERS := \
 	pg_publication_namespace.h \
 	pg_publication_rel.h \
 	pg_subscription.h \
-	pg_subscription_rel.h
+	pg_subscription_rel.h \
+	pg_subscription_seq.h
 
 GENERATED_HEADERS := $(CATALOG_HEADERS:%.h=%_d.h)
 
diff --git a/src/include/catalog/meson.build b/src/include/catalog/meson.build
index f70d1daba5..9330bd698a 100644
--- a/src/include/catalog/meson.build
+++ b/src/include/catalog/meson.build
@@ -69,6 +69,7 @@ catalog_headers = [
   'pg_publication_rel.h',
   'pg_subscription.h',
   'pg_subscription_rel.h',
+  'pg_subscription_seq.h',
 ]
 
 # The .dat files we need can just be listed alphabetically.
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6a5476d3c4..8e68adb01f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
@@ -11937,6 +11945,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..ec42386bc4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	/* true if truncates are published */
 	bool		pubtruncate;
 
+	/* true if sequences are published */
+	bool		pubsequence;
+
 	/* true if partition changes are published using root schema */
 	bool		pubviaroot;
 } FormData_pg_publication;
@@ -75,6 +84,7 @@ typedef struct PublicationActions
 	bool		pubupdate;
 	bool		pubdelete;
 	bool		pubtruncate;
+	bool		pubsequence;
 } PublicationActions;
 
 typedef struct PublicationDesc
@@ -102,6 +112,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -133,14 +144,15 @@ typedef enum PublicationPartOpt
 	PUBLICATION_PART_ALL,
 } PublicationPartOpt;
 
-extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
+extern List *GetPublicationRelations(Oid pubid, char objectType,
+									 PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
-extern List *GetPublicationSchemas(Oid pubid);
-extern List *GetSchemaPublications(Oid schemaid);
-extern List *GetSchemaPublicationRelations(Oid schemaid,
+extern List *GetPublicationSchemas(Oid pubid, char objectType);
+extern List *GetSchemaPublications(Oid schemaid, char objectType);
+extern List *GetSchemaPublicationRelations(Oid schemaid, char objectType,
 										   PublicationPartOpt pub_partopt);
-extern List *GetAllSchemaPublicationRelations(Oid pubid,
+extern List *GetAllSchemaPublicationRelations(Oid puboid, char objectType,
 											  PublicationPartOpt pub_partopt);
 extern List *GetPubPartitionOptionRelations(List *result,
 											PublicationPartOpt pub_partopt,
@@ -148,11 +160,15 @@ extern List *GetPubPartitionOptionRelations(List *result,
 extern Oid	GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
 											int *ancestor_level);
 
+extern List *GetAllSequencesPublications(void);
+extern List *GetAllSequencesPublicationRelations(void);
+
 extern bool is_publishable_relation(Relation rel);
 extern bool is_schema_publication(Oid pubid);
 extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
 											  bool if_not_exists);
 extern ObjectAddress publication_add_schema(Oid pubid, Oid schemaid,
+											char objectType,
 											bool if_not_exists);
 
 extern Bitmapset *pub_collist_to_bitmapset(Bitmapset *columns, Datum pubcols,
diff --git a/src/include/catalog/pg_publication_namespace.h b/src/include/catalog/pg_publication_namespace.h
index 1cfb557684..b98659ac79 100644
--- a/src/include/catalog/pg_publication_namespace.h
+++ b/src/include/catalog/pg_publication_namespace.h
@@ -32,6 +32,7 @@ CATALOG(pg_publication_namespace,6237,PublicationNamespaceRelationId)
 	Oid			oid;			/* oid */
 	Oid			pnpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */
 	Oid			pnnspid BKI_LOOKUP(pg_namespace);	/* Oid of the schema */
+	char		pntype;			/* object type to include */
 } FormData_pg_publication_namespace;
 
 /* ----------------
@@ -42,9 +43,16 @@ CATALOG(pg_publication_namespace,6237,PublicationNamespaceRelationId)
 typedef FormData_pg_publication_namespace *Form_pg_publication_namespace;
 
 DECLARE_UNIQUE_INDEX_PKEY(pg_publication_namespace_oid_index, 6238, PublicationNamespaceObjectIndexId, pg_publication_namespace, btree(oid oid_ops));
-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index, 6239, PublicationNamespacePnnspidPnpubidIndexId, pg_publication_namespace, btree(pnnspid oid_ops, pnpubid oid_ops));
+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pntype_index, 8903, PublicationNamespacePnnspidPnpubidPntypeIndexId, pg_publication_namespace, btree(pnnspid oid_ops, pnpubid oid_ops, pntype char_ops));
+
+/* object type to include from a schema, maps to relkind */
+#define		PUB_OBJTYPE_TABLE			't' /* table (regular or partitioned) */
+#define		PUB_OBJTYPE_SEQUENCE		's' /* sequence object */
+#define		PUB_OBJTYPE_UNSUPPORTED		'u' /* used for non-replicated types */
+
+extern char pub_get_object_type_for_relkind(char relkind);
 
 MAKE_SYSCACHE(PUBLICATIONNAMESPACE, pg_publication_namespace_oid_index, 64);
-MAKE_SYSCACHE(PUBLICATIONNAMESPACEMAP, pg_publication_namespace_pnnspid_pnpubid_index, 64);
+MAKE_SYSCACHE(PUBLICATIONNAMESPACEMAP, pg_publication_namespace_pnnspid_pnpubid_pntype_index, 64);
 
 #endif							/* PG_PUBLICATION_NAMESPACE_H */
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..eb9ee15127 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -98,6 +98,9 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW
 								 * slots) in the upstream database are enabled
 								 * to be synchronized to the standbys. */
 
+	bool		subsequences;	/* True if sequences should be requested from
+								 * the publisher */
+
 #ifdef CATALOG_VARLEN			/* variable-length fields start here */
 	/* Connection string to the publisher */
 	text		subconninfo BKI_FORCE_NOT_NULL;
@@ -151,6 +154,7 @@ typedef struct Subscription
 								 * (i.e. the main slot and the table sync
 								 * slots) in the upstream database are enabled
 								 * to be synchronized to the standbys. */
+	bool		sequences;		/* Request sequences from the publisher */
 	char	   *conninfo;		/* Connection string to the publisher */
 	char	   *slotname;		/* Name of the replication slot */
 	char	   *synccommit;		/* Synchronous commit setting for worker */
diff --git a/src/include/catalog/pg_subscription_seq.h b/src/include/catalog/pg_subscription_seq.h
new file mode 100644
index 0000000000..d0c03ef58a
--- /dev/null
+++ b/src/include/catalog/pg_subscription_seq.h
@@ -0,0 +1,67 @@
+/* -------------------------------------------------------------------------
+ *
+ * pg_subscription_seq.h
+ *	  definition of the system catalog containing the state for each
+ *	  replicated sequence in each subscription (pg_subscription_seq)
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/catalog/pg_subscription_seq.h
+ *
+ * NOTES
+ *	  The Catalog.pm module reads this file and derives schema
+ *	  information.
+ *
+ * -------------------------------------------------------------------------
+ */
+#ifndef PG_SUBSCRIPTION_SEQ_H
+#define PG_SUBSCRIPTION_SEQ_H
+
+#include "access/xlogdefs.h"
+#include "catalog/genbki.h"
+#include "catalog/pg_subscription_seq_d.h"
+#include "nodes/pg_list.h"
+#include "replication/walreceiver.h"
+
+/* ----------------
+ *		pg_subscription_seq definition. cpp turns this into
+ *		typedef struct FormData_pg_subscription_seq
+ * ----------------
+ */
+CATALOG(pg_subscription_seq,8001,SubscriptionSeqRelationId)
+{
+	Oid			sssubid BKI_LOOKUP(pg_subscription);	/* Oid of subscription */
+	Oid			ssseqid BKI_LOOKUP(pg_class);	/* Oid of relation */
+
+	/*
+	 * Although sssublsn is a fixed-width type, it is allowed to be NULL, so
+	 * we prevent direct C code access to it just as for a varlena field.
+	 */
+#ifdef CATALOG_VARLEN			/* variable-length fields start here */
+
+	XLogRecPtr	sssublsn BKI_FORCE_NULL;	/* remote LSN of the state change
+											 * used for synchronization
+											 * coordination, or NULL if not
+											 * valid */
+#endif
+} FormData_pg_subscription_seq;
+
+typedef FormData_pg_subscription_seq *Form_pg_subscription_seq;
+
+DECLARE_UNIQUE_INDEX_PKEY(pg_subscription_seq_ssseqid_sssubid_index, 8002, SubscriptionSeqSsseqidSssubidIndexId, pg_subscription_seq, btree(ssseqid oid_ops, sssubid oid_ops));
+
+MAKE_SYSCACHE(SUBSCRIPTIONSEQMAP, pg_subscription_seq_ssseqid_sssubid_index, 64);
+
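+/* state of one sequence tracked for a subscription (cf. pg_subscription_seq) */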
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
+extern List *GetSubscriptionSequences(Oid subid);
+extern List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
+extern void copy_subscription_sequences(WalReceiverConn *conn, Oid subid,
+										List *sequences);
+
+#endif							/* PG_SUBSCRIPTION_SEQ_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..fad731a733 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, int64 value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ddfed02db2..afb994ae45 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4150,6 +4150,10 @@ typedef enum PublicationObjSpecType
 	PUBLICATIONOBJ_TABLES_IN_SCHEMA,	/* All tables in schema */
 	PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA,	/* All tables in first element of
 											 * search_path */
+	PUBLICATIONOBJ_SEQUENCE,	/* Sequence type */
+	PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+	PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* Get the first element of
+											 * search_path */
 	PUBLICATIONOBJ_CONTINUATION,	/* Continuation of previous type */
 } PublicationObjSpecType;
 
@@ -4168,7 +4172,8 @@ typedef struct CreatePublicationStmt
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
@@ -4191,7 +4196,8 @@ typedef struct AlterPublicationStmt
 	 * objects.
 	 */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 	AlterPublicationAction action;	/* What action to perform with the given
 									 * objects */
 } AlterPublicationStmt;
@@ -4213,6 +4219,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/object_address.out b/src/test/regress/expected/object_address.out
index fc42d418bf..9aec2612d0 100644
--- a/src/test/regress/expected/object_address.out
+++ b/src/test/regress/expected/object_address.out
@@ -47,6 +47,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
 CREATE PUBLICATION addr_pub_schema FOR TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR SEQUENCES IN SCHEMA addr_nsp;
 RESET client_min_messages;
 CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
 WARNING:  subscription was created, but is not connected
@@ -315,12 +316,12 @@ WARNING:  error for function of access method,{addr_nsp,zwei},{}: name list leng
 WARNING:  error for function of access method,{addr_nsp,zwei},{integer}: name list length must be at least 3
 WARNING:  error for function of access method,{eins,zwei,drei},{}: argument list length must be exactly 2
 WARNING:  error for function of access method,{eins,zwei,drei},{integer}: argument list length must be exactly 2
-WARNING:  error for publication namespace,{eins},{}: argument list length must be exactly 1
-WARNING:  error for publication namespace,{eins},{integer}: schema "eins" does not exist
-WARNING:  error for publication namespace,{addr_nsp,zwei},{}: name list length must be exactly 1
-WARNING:  error for publication namespace,{addr_nsp,zwei},{integer}: name list length must be exactly 1
-WARNING:  error for publication namespace,{eins,zwei,drei},{}: name list length must be exactly 1
-WARNING:  error for publication namespace,{eins,zwei,drei},{integer}: name list length must be exactly 1
+WARNING:  error for publication namespace,{eins},{}: argument list length must be exactly 2
+WARNING:  error for publication namespace,{eins},{integer}: argument list length must be exactly 2
+WARNING:  error for publication namespace,{addr_nsp,zwei},{}: argument list length must be exactly 2
+WARNING:  error for publication namespace,{addr_nsp,zwei},{integer}: argument list length must be exactly 2
+WARNING:  error for publication namespace,{eins,zwei,drei},{}: argument list length must be exactly 2
+WARNING:  error for publication namespace,{eins,zwei,drei},{integer}: argument list length must be exactly 2
 WARNING:  error for publication relation,{eins},{}: argument list length must be exactly 1
 WARNING:  error for publication relation,{eins},{integer}: relation "eins" does not exist
 WARNING:  error for publication relation,{addr_nsp,zwei},{}: argument list length must be exactly 1
@@ -441,7 +442,8 @@ WITH objects (type, name, args) AS (VALUES
     ('transform', '{int}', '{sql}'),
     ('access method', '{btree}', '{}'),
     ('publication', '{addr_pub}', '{}'),
-    ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+    ('publication namespace', '{addr_nsp}', '{addr_pub_schema, t}'),
+    ('publication namespace', '{addr_nsp}', '{addr_pub_schema2, s}'),
     ('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
     ('subscription', '{regress_addr_sub}', '{}'),
     ('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -504,7 +506,8 @@ text search template|addr_nsp|addr_ts_temp|addr_nsp.addr_ts_temp|t
 subscription|NULL|regress_addr_sub|regress_addr_sub|t
 publication|NULL|addr_pub|addr_pub|t
 publication relation|NULL|NULL|addr_nsp.gentable in publication addr_pub|t
-publication namespace|NULL|NULL|addr_nsp in publication addr_pub_schema|t
+publication namespace|NULL|NULL|addr_nsp in publication addr_pub_schema type t|t
+publication namespace|NULL|NULL|addr_nsp in publication addr_pub_schema2 type s|t
 ---
 --- Cleanup resources
 ---
@@ -516,6 +519,7 @@ drop cascades to server integer
 drop cascades to user mapping for regress_addr_user on server integer
 DROP PUBLICATION addr_pub;
 DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
 DROP SUBSCRIPTION regress_addr_sub;
 DROP SCHEMA addr_nsp CASCADE;
 NOTICE:  drop cascades to 14 other objects
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 215eb899be..98c48e19bb 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -266,3 +266,5 @@ NOTICE:  checking pg_subscription {subdbid} => pg_database {oid}
 NOTICE:  checking pg_subscription {subowner} => pg_authid {oid}
 NOTICE:  checking pg_subscription_rel {srsubid} => pg_subscription {oid}
 NOTICE:  checking pg_subscription_rel {srrelid} => pg_class {oid}
+NOTICE:  checking pg_subscription_seq {sssubid} => pg_subscription {oid}
+NOTICE:  checking pg_subscription_seq {ssseqid} => pg_class {oid}
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..d26156a2f1 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                            List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6052321911 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                            List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f         | f
 (2 rows)
 
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                            List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | t         | f
 (2 rows)
 
 --- adding tables
@@ -61,6 +61,9 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
 DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
+ERROR:  object type does not match type expected by command
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -87,10 +90,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                              Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +102,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                              Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                              Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +126,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                           Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +147,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +159,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +173,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +198,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                            Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +213,525 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                                  Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                                  Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+ERROR:  publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL:  Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+ERROR:  publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL:  Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't set sequence on for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+ERROR:  publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL:  Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCES IN SCHEMA pub_test;
+ERROR:  publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL:  Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCES IN SCHEMA pub_test;
+ERROR:  publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL:  Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCES IN SCHEMA pub_test;
+ERROR:  publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL:  Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+                                            Publication testpub_forsequence
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Sequences:
+    "public.testpub_seq0"
+Sequences from schemas:
+    "pub_test"
+
+-- add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+ERROR:  object type does not match type expected by command
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+                                            Publication testpub_forsequence
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Sequences:
+    "pub_test.testpub_seq1"
+    "public.testpub_seq0"
+
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+                                            Publication testpub_forsequence
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Sequences from schemas:
+    "pub_test"
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+                                             Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Sequences:
+    "pub_test.testpub_seq1"
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+         pubname         | puballtables | puballsequences 
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f            | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+                       Sequence "pub_test.testpub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_forallsequences"
+    "testpub_forschema"
+    "testpub_forsequence"
+
+\dRp+ testpub_forallsequences
+                                          Publication testpub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | t             | t       | f       | f       | f         | t         | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+-- test publishing multiple sequences at the same time
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE testpub_seq2;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_multi FOR SEQUENCE testpub_seq1, testpub_seq2;
+RESET client_min_messages;
+\dRp+ testpub_multi
+                                               Publication testpub_multi
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Sequences:
+    "public.testpub_seq1"
+    "public.testpub_seq2"
+
+DROP PUBLICATION testpub_multi;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE testpub_seq2;
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+                                                Publication testpub_mix
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables:
+    "public.testpub_tbl1"
+Sequences:
+    "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCES IN SCHEMA pub_test, TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+                                                Publication testpub_mix
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables:
+    "public.testpub_tbl1"
+Tables from schemas:
+    "pub_test"
+Sequences:
+    "public.testpub_seq1"
+Sequences from schemas:
+    "pub_test"
+
+ALTER PUBLICATION testpub_mix DROP SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+                                                Publication testpub_mix
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables:
+    "public.testpub_tbl1"
+Tables from schemas:
+    "pub_test"
+Sequences:
+    "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix DROP TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+                                                Publication testpub_mix
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables:
+    "public.testpub_tbl1"
+Sequences:
+    "public.testpub_seq1"
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+                                              Publication testpub_schemas
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables from schemas:
+    "pub_test2"
+Sequences from schemas:
+    "pub_test1"
+
+\dn+ pub_test1
+                            List of schemas
+   Name    |          Owner           | Access privileges | Description 
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user |                   | 
+Publications:
+    "testpub_schemas" (sequences)
+
+\dn+ pub_test2
+                            List of schemas
+   Name    |          Owner           | Access privileges | Description 
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user |                   | 
+Publications:
+    "testpub_schemas" (tables)
+
+\d+ pub_test1.test_seq1;
+                        Sequence "pub_test1.test_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+                               Table "pub_test1.test_tbl1"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl1_pkey" PRIMARY KEY, btree (a)
+
+\d+ pub_test2.test_seq2;
+                        Sequence "pub_test2.test_seq2"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+
+\d+ pub_test2.test_tbl2;
+                               Table "pub_test2.test_tbl2"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+    "testpub_schemas"
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCES IN SCHEMA pub_test2;
+\dRp+ testpub_schemas
+                                              Publication testpub_schemas
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables from schemas:
+    "pub_test1"
+    "pub_test2"
+Sequences from schemas:
+    "pub_test1"
+    "pub_test2"
+
+\dn+ pub_test1
+                            List of schemas
+   Name    |          Owner           | Access privileges | Description 
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user |                   | 
+Publications:
+    "testpub_schemas" (sequences)
+    "testpub_schemas" (tables)
+
+\dn+ pub_test2
+                            List of schemas
+   Name    |          Owner           | Access privileges | Description 
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user |                   | 
+Publications:
+    "testpub_schemas" (tables)
+    "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+                        Sequence "pub_test1.test_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+                               Table "pub_test1.test_tbl1"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+                        Sequence "pub_test2.test_seq2"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+                               Table "pub_test2.test_tbl2"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+    "testpub_schemas"
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+                                              Publication testpub_schemas
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables from schemas:
+    "pub_test1"
+Sequences from schemas:
+    "pub_test2"
+
+\dn+ pub_test1
+                            List of schemas
+   Name    |          Owner           | Access privileges | Description 
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user |                   | 
+Publications:
+    "testpub_schemas" (tables)
+
+\dn+ pub_test2
+                            List of schemas
+   Name    |          Owner           | Access privileges | Description 
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user |                   | 
+Publications:
+    "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+                        Sequence "pub_test1.test_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+
+\d+ pub_test1.test_tbl1;
+                               Table "pub_test1.test_tbl1"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+                        Sequence "pub_test2.test_seq2"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+                               Table "pub_test2.test_tbl2"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+-- explicitly add a table and a sequence from the schemas published for the other object type
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+                                              Publication testpub_schemas
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables:
+    "pub_test2.test_tbl2"
+Tables from schemas:
+    "pub_test1"
+Sequences:
+    "pub_test1.test_seq1"
+Sequences from schemas:
+    "pub_test2"
+
+\d+ pub_test1.test_seq1;
+                        Sequence "pub_test1.test_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+                               Table "pub_test1.test_tbl1"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+                        Sequence "pub_test2.test_seq2"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+                               Table "pub_test2.test_tbl2"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+    "testpub_schemas"
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+                                              Publication testpub_schemas
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
+Tables from schemas:
+    "pub_test1"
+Sequences from schemas:
+    "pub_test2"
+
+\d+ pub_test1.test_seq1;
+                        Sequence "pub_test1.test_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+
+\d+ pub_test1.test_tbl1;
+                               Table "pub_test1.test_tbl1"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+                        Sequence "pub_test2.test_seq2"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+                               Table "pub_test2.test_tbl2"
+ Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description 
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a      | integer |           | not null |         | plain   |              | 
+ b      | integer |           |          |         | plain   |              | 
+Indexes:
+    "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +747,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +765,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                             Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +797,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                                  Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +813,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                                  Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +832,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                                  Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +843,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                                  Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +879,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                              Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +892,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                              Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +1010,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                                  Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +1227,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                             Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +1414,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1622,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                               Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1663,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                              Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1744,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                              Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | t         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1757,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                         List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | t         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                           List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | t         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1786,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1812,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                              Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1883,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1894,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1915,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1927,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1939,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1950,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1961,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1972,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +2003,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +2015,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +2097,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                             Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +2118,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1671,40 +2175,85 @@ CREATE SCHEMA sch1;
 CREATE SCHEMA sch2;
 CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
 CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
 -- Schema publication that does not include the schema that has the parent table
 CREATE PUBLICATION pub FOR TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCES IN SCHEMA sch2;
 SELECT * FROM pg_publication_tables;
  pubname | schemaname | tablename  | attnames | rowfilter 
 ---------+------------+------------+----------+-----------
  pub     | sch2       | tbl1_part1 | {a}      | 
 (1 row)
 
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch2       | seq2
+(1 row)
+
 DROP PUBLICATION pub;
 -- Table publication that does not include the parent table
 CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
 SELECT * FROM pg_publication_tables;
  pubname | schemaname | tablename  | attnames | rowfilter 
 ---------+------------+------------+----------+-----------
  pub     | sch2       | tbl1_part1 | {a}      | 
 (1 row)
 
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch2       | seq2
+(1 row)
+
 -- Table publication that includes both the parent table and the child table
 ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
 SELECT * FROM pg_publication_tables;
  pubname | schemaname | tablename | attnames | rowfilter 
 ---------+------------+-----------+----------+-----------
  pub     | sch1       | tbl1      | {a}      | 
 (1 row)
 
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch1       | seq1
+ pub     | sch2       | seq2
+(2 rows)
+
 DROP PUBLICATION pub;
 -- Schema publication that does not include the schema that has the parent table
 CREATE PUBLICATION pub FOR TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
 SELECT * FROM pg_publication_tables;
  pubname | schemaname | tablename  | attnames | rowfilter 
 ---------+------------+------------+----------+-----------
  pub     | sch2       | tbl1_part1 | {a}      | 
 (1 row)
 
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch1       | seq1
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename | attnames | rowfilter 
+---------+------------+-----------+----------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch2       | seq2
+(1 row)
+
 DROP PUBLICATION pub;
 -- Table publication that does not include the parent table
 CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
@@ -1714,14 +2263,26 @@ SELECT * FROM pg_publication_tables;
  pub     | sch2       | tbl1_part1 | {a}      | 
 (1 row)
 
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+(0 rows)
+
 -- Table publication that includes both the parent table and the child table
 ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCES IN SCHEMA sch2;
 SELECT * FROM pg_publication_tables;
  pubname | schemaname | tablename  | attnames | rowfilter 
 ---------+------------+------------+----------+-----------
  pub     | sch2       | tbl1_part1 | {a}      | 
 (1 row)
 
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch2       | seq2
+(1 row)
+
 DROP PUBLICATION pub;
 DROP TABLE sch2.tbl1_part1;
 DROP TABLE sch1.tbl1;
@@ -1737,9 +2298,81 @@ SELECT * FROM pg_publication_tables;
  pub     | sch1       | tbl1      | {a}      | 
 (1 row)
 
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+(0 rows)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename | attnames | rowfilter 
+---------+------------+-----------+----------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch2       | seq2
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequences in schema publication
+CREATE PUBLICATION pub FOR SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename | attnames | rowfilter 
+---------+------------+-----------+----------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch2       | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename | attnames | rowfilter 
+---------+------------+-----------+----------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch1       | seq1
+ pub     | sch2       | seq2
+(2 rows)
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename | attnames | rowfilter 
+---------+------------+-----------+----------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch2       | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename | attnames | rowfilter 
+---------+------------+-----------+----------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename 
+---------+------------+--------------
+ pub     | sch1       | seq1
+ pub     | sch2       | seq2
+(2 rows)
+
 RESET client_min_messages;
 DROP PUBLICATION pub;
 DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
 DROP SCHEMA sch1 cascade;
 DROP SCHEMA sch2 cascade;
 RESET SESSION AUTHORIZATION;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index ef658ad740..4acb94ce96 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1441,6 +1441,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql
index 1a6c61f49d..5c6cdeb05d 100644
--- a/src/test/regress/sql/object_address.sql
+++ b/src/test/regress/sql/object_address.sql
@@ -50,6 +50,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
 CREATE PUBLICATION addr_pub_schema FOR TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR SEQUENCES IN SCHEMA addr_nsp;
 RESET client_min_messages;
 CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
 CREATE STATISTICS addr_nsp.gentable_stat ON a, b FROM addr_nsp.gentable;
@@ -206,7 +207,8 @@ WITH objects (type, name, args) AS (VALUES
     ('transform', '{int}', '{sql}'),
     ('access method', '{btree}', '{}'),
     ('publication', '{addr_pub}', '{}'),
-    ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+    ('publication namespace', '{addr_nsp}', '{addr_pub_schema, t}'),
+    ('publication namespace', '{addr_nsp}', '{addr_pub_schema2, s}'),
     ('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
     ('subscription', '{regress_addr_sub}', '{}'),
     ('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -227,6 +229,7 @@ ORDER BY addr1.classid, addr1.objid, addr1.objsubid;
 DROP FOREIGN DATA WRAPPER addr_fdw CASCADE;
 DROP PUBLICATION addr_pub;
 DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
 DROP SUBSCRIPTION regress_addr_sub;
 
 DROP SCHEMA addr_nsp CASCADE;
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..cfa030b812 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -27,7 +27,7 @@ CREATE PUBLICATION testpub_xxx WITH (publish_via_partition_root = 'true', publis
 
 \dRp
 
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
 
 \dRp
 
@@ -46,6 +46,8 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
 CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 -- fail - can't add to for all tables publication
@@ -117,6 +119,188 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCES IN SCHEMA pub_test;
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCES IN SCHEMA pub_test;
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCES IN SCHEMA pub_test;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+
+
+-- publication testing multiple sequences at the same time
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE testpub_seq2;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_multi FOR SEQUENCE testpub_seq1, testpub_seq2;
+RESET client_min_messages;
+
+\dRp+ testpub_multi
+
+DROP PUBLICATION testpub_multi;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE testpub_seq2;
+
+
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCES IN SCHEMA pub_test, TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+
+
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCES IN SCHEMA pub_test2;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add a different schema (not including the already published sequences)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -1061,32 +1245,51 @@ CREATE SCHEMA sch1;
 CREATE SCHEMA sch2;
 CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
 CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
 -- Schema publication that does not include the schema that has the parent table
 CREATE PUBLICATION pub FOR TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCES IN SCHEMA sch2;
 SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
 
 DROP PUBLICATION pub;
 -- Table publication that does not include the parent table
 CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
 SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
 
 -- Table publication that includes both the parent table and the child table
 ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
 SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
 
 DROP PUBLICATION pub;
 -- Schema publication that does not include the schema that has the parent table
 CREATE PUBLICATION pub FOR TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
 SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
 
 DROP PUBLICATION pub;
 -- Table publication that does not include the parent table
 CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
 SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
 
 -- Table publication that includes both the parent table and the child table
 ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCES IN SCHEMA sch2;
 SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
 
 DROP PUBLICATION pub;
 DROP TABLE sch2.tbl1_part1;
@@ -1099,10 +1302,36 @@ CREATE TABLE sch1.tbl1_part3 (a int) PARTITION BY RANGE(a);
 ALTER TABLE sch1.tbl1 ATTACH PARTITION sch1.tbl1_part3 FOR VALUES FROM (20) to (30);
 CREATE PUBLICATION pub FOR TABLES IN SCHEMA sch1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
 SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequences in schema publication
+CREATE PUBLICATION pub FOR SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
 
 RESET client_min_messages;
 DROP PUBLICATION pub;
 DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
 DROP SCHEMA sch1 cascade;
 DROP SCHEMA sch2 cascade;
 
-- 
2.34.1

#25Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#23)
Re: Logical Replication of sequences

On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

To achieve this, we can allow sequences to be copied during
the initial CREATE SUBSCRIPTION command similar to what we do for
tables. And then later by new/existing command, we re-copy the already
existing sequences on the subscriber.

The options for the new command could be:
Alter Subscription ... Refresh Sequences
Alter Subscription ... Replicate Sequences

In the second option, we need to introduce a new keyword Replicate.
Can you think of any better option?

Another idea is doing that using options. For example,

For initial sequences synchronization:

CREATE SUBSCRIPTION ... WITH (copy_sequence = true);

How will it interact with the existing copy_data option? So copy_data
will become equivalent to copy_table_data, right?

Right.

For re-copy (or update) sequences:

ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);

Similar to the previous point it can be slightly confusing w.r.t
copy_data. And would copy_sequence here mean that it would copy
sequence values of both pre-existing and newly added sequences, if so,
that would make it behave differently than copy_data? The other
possibility in this direction would be to introduce an option like
replicate_all_sequences/copy_all_sequences which indicates a copy of
both pre-existing and new sequences, if any.

Copying sequence data works differently from replicating table data
(initial data copy plus logical replication). So I thought the
copy_sequence option (or whatever better name) would always do both:
update pre-existing sequences and add new sequences. REFRESH
PUBLICATION updates the set of tables to be subscribed, so we would
also update or add the sequences associated with those tables.

Are you imagining the behavior for sequences associated with tables
differently than the ones defined by the CREATE SEQUENCE .. command? I
was thinking that users would associate sequences with publications
similar to what we do for tables for both cases. For example, they
need to explicitly mention the sequences they want to replicate by
commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE
PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR
SEQUENCES IN SCHEMA sch1;

In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1
should copy both the explicitly defined sequences and sequences
defined with the tables. Do you think a different variant for just
copying sequences implicitly associated with tables (say for identity
columns)?

Oh, I was thinking that your proposal was to copy literally all
sequences by REPLICA/REFRESH SEQUENCE command. But it seems to make
sense to explicitly specify the sequences they want to replicate. It
also means that they can create a publication that has only sequences.
In this case, even if they create a subscription for that publication,
we don't launch any apply workers for that subscription. Right?
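
To make that concrete, a minimal sketch of a sequence-only publication
and subscription, using the FOR ALL SEQUENCES form from the WIP patch
(the connection string and object names below are placeholders):

CREATE PUBLICATION seq_only_pub FOR ALL SEQUENCES;
-- On the subscriber; per the discussion above, a subscription to a
-- sequence-only publication would not need any apply worker.
CREATE SUBSCRIPTION seq_only_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION seq_only_pub;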

Also, given that the main use case (at least as the first step) is
version upgrade, do we really need to support SEQUENCES IN SCHEMA and
even FOR SEQUENCE? The WIP patch Vignesh recently submitted is more
than 6k lines. I think we can cut the scope for the first
implementation so as to make the review easy.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#26Amul Sul
sulamul@gmail.com
In reply to: vignesh C (#24)
Re: Logical Replication of sequences

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something. How do you track sequence mapping with the publication?

Regards,
Amul

#27Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Masahiko Sawada (#25)
Re: Logical Replication of sequences

On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Thu, Jun 6, 2024 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 6, 2024 at 11:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 5, 2024 at 9:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

To achieve this, we can allow sequences to be copied during
the initial CREATE SUBSCRIPTION command similar to what we do for
tables. And then later by new/existing command, we re-copy the already
existing sequences on the subscriber.

The options for the new command could be:
Alter Subscription ... Refresh Sequences
Alter Subscription ... Replicate Sequences

In the second option, we need to introduce a new keyword Replicate.
Can you think of any better option?

Another idea is doing that using options. For example,

For initial sequences synchronization:

CREATE SUBSCRIPTION ... WITH (copy_sequence = true);

How will it interact with the existing copy_data option? So copy_data
will become equivalent to copy_table_data, right?

Right.

For re-copy (or update) sequences:

ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_sequence = true);

Similar to the previous point it can be slightly confusing w.r.t
copy_data. And would copy_sequence here mean that it would copy
sequence values of both pre-existing and newly added sequences, if so,
that would make it behave differently than copy_data? The other
possibility in this direction would be to introduce an option like
replicate_all_sequences/copy_all_sequences which indicates a copy of
both pre-existing and new sequences, if any.

Copying sequence data works differently from replicating table data
(initial data copy plus logical replication). So I thought the
copy_sequence option (or whatever better name) would always do both:
update pre-existing sequences and add new sequences. REFRESH
PUBLICATION updates the set of tables to be subscribed, so we would
also update or add the sequences associated with those tables.

Are you imagining the behavior for sequences associated with tables
differently than the ones defined by the CREATE SEQUENCE .. command? I
was thinking that users would associate sequences with publications
similar to what we do for tables for both cases. For example, they
need to explicitly mention the sequences they want to replicate by
commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE
PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR
SEQUENCES IN SCHEMA sch1;

In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1
should copy both the explicitly defined sequences and sequences
defined with the tables. Do you think a different variant for just
copying sequences implicitly associated with tables (say for identity
columns)?

Oh, I was thinking that your proposal was to copy literally all
sequences by REPLICA/REFRESH SEQUENCE command. But it seems to make
sense to explicitly specify the sequences they want to replicate. It
also means that they can create a publication that has only sequences.
In this case, even if they create a subscription for that publication,
we don't launch any apply workers for that subscription. Right?

Also, given that the main use case (at least as the first step) is
version upgrade, do we really need to support SEQUENCES IN SCHEMA and
even FOR SEQUENCE?

Also, I guess that specifying individual sequences might not be easy
for users in some cases. For sequences owned by a column of a table,
users might want to specify them together with the table rather than
separately. For example, CREATE PUBLICATION ... FOR TABLE tab1 WITH
SEQUENCES would mean adding the table tab1 and its sequences to the
publication. For other sequences (i.e., those not owned by any table),
users might want to specify them individually.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#28Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#27)
Re: Logical Replication of sequences

On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Are you imagining the behavior for sequences associated with tables
differently than the ones defined by the CREATE SEQUENCE .. command? I
was thinking that users would associate sequences with publications
similar to what we do for tables for both cases. For example, they
need to explicitly mention the sequences they want to replicate by
commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE
PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR
SEQUENCES IN SCHEMA sch1;

In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1
should copy both the explicitly defined sequences and sequences
defined with the tables. Do you think a different variant for just
copying sequences implicitly associated with tables (say for identity
columns)?

Oh, I was thinking that your proposal was to copy literally all
sequences by REPLICA/REFRESH SEQUENCE command.

I am trying to keep the behavior as close to tables as possible.

But it seems to make
sense to explicitly specify the sequences they want to replicate. It
also means that they can create a publication that has only sequences.
In this case, even if they create a subscription for that publication,
we don't launch any apply workers for that subscription. Right?

Right, good point. I had not thought about this.

Also, given that the main use case (at least as the first step) is
version upgrade, do we really need to support SEQUENCES IN SCHEMA and
even FOR SEQUENCE?

At the very least, we can split the patch to move these variants to a
separate patch. Once the main patch is finalized, we can try to
evaluate the remaining separately.

Also, I guess that specifying individual sequences might not be easy
for users in some cases. For sequences owned by a column of a table,
users might want to specify them together with the table rather than
separately. For example, CREATE PUBLICATION ... FOR TABLE tab1 WITH
SEQUENCES would mean adding the table tab1 and its sequences to the
publication. For other sequences (i.e., those not owned by any table),
users might want to specify them individually.

Yeah, or we can have a syntax like CREATE PUBLICATION ... FOR TABLE
tab1 INCLUDE SEQUENCES. Normally, we use the WITH clause for options
(For example, CREATE SUBSCRIPTION ... WITH (streaming=...)).
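
To put the two sketches side by side (neither syntax exists today;
both are only ideas from this thread, and tab1 is a placeholder):

CREATE PUBLICATION pub_a FOR TABLE tab1 WITH SEQUENCES;
CREATE PUBLICATION pub_b FOR TABLE tab1 INCLUDE SEQUENCES;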

--
With Regards,
Amit Kapila.

#29vignesh C
vignesh21@gmail.com
In reply to: Amul Sul (#26)
Re: Logical Replication of sequences

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the LSN because the sequence LSN informs the user up to
which point the sequence has been synchronized in pg_subscription_seq.
Since we are not supporting incremental sync, the user will be able to
identify whether to run a refresh of sequences by comparing the LSN
stored in pg_subscription_seq with the LSN of the sequence on the
publisher (using the newly added pg_sequence_state). Also, this
parallels our implementation for table synchronization and will aid in
expanding later to a) incremental synchronization and b) utilizing
workers for synchronization based on sequence states, if necessary.
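
To illustrate the intended check (purely a sketch: pg_subscription_seq
and pg_sequence_state come from the WIP patch, and the exact columns
and function arguments shown here are assumptions, not the final
definitions):

-- On the subscriber: LSN up to which each sequence was synchronized.
SELECT * FROM pg_subscription_seq;
-- On the publisher: current state (including the LSN) of a sequence.
SELECT * FROM pg_sequence_state('s1');
-- If the publisher-side LSN is newer, the user would re-run the
-- sequence refresh command being discussed in this thread.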

How do you track sequence mapping with the publication?

In the publisher we use pg_publication_rel and
pg_publication_namespace for mapping the sequences with the
publication.
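
For example, on the publisher a rough sketch like the following (using
the existing catalogs; with this patch, explicitly added sequences
would appear in pg_publication_rel and schema-level additions in
pg_publication_namespace) lists the sequences mapped to each
publication:

SELECT p.pubname, c.relname AS sequencename
  FROM pg_publication_rel pr
  JOIN pg_publication p ON p.oid = pr.prpubid
  JOIN pg_class c ON c.oid = pr.prrelid
 WHERE c.relkind = 'S';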

Regards,
Vignesh

#30vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#28)
Re: Logical Replication of sequences

On Mon, 10 Jun 2024 at 14:48, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Are you imagining the behavior for sequences associated with tables
differently than the ones defined by the CREATE SEQUENCE .. command? I
was thinking that users would associate sequences with publications
similar to what we do for tables for both cases. For example, they
need to explicitly mention the sequences they want to replicate by
commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE
PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR
SEQUENCES IN SCHEMA sch1;

In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1
should copy both the explicitly defined sequences and sequences
defined with the tables. Do you think a different variant for just
copying sequences implicitly associated with tables (say for identity
columns)?

Oh, I was thinking that your proposal was to copy literally all
sequences by REPLICA/REFRESH SEQUENCE command.

I am trying to keep the behavior as close to tables as possible.

But it seems to make
sense to explicitly specify the sequences they want to replicate. It
also means that they can create a publication that has only sequences.
In this case, even if they create a subscription for that publication,
we don't launch any apply workers for that subscription. Right?

Right, good point. I had not thought about this.

Also, given that the main use case (at least as the first step) is
version upgrade, do we really need to support SEQUENCES IN SCHEMA and
even FOR SEQUENCE?

At the very least, we can split the patch to move these variants to a
separate patch. Once the main patch is finalized, we can try to
evaluate the remaining separately.

I had an offline discussion with Amit about how to split the patches
to make the review easier. We agreed on the following split:
1. Core sequence changes: setting and getting of sequence values.
2. Publisher-side changes for "FOR ALL SEQUENCES" publications.
3. Subscriber-side changes to synchronize "FOR ALL SEQUENCES"
   publications.
4. Support for "FOR SEQUENCE" publications.
5. Support for "FOR ALL SEQUENCES IN SCHEMA" publications.

I will work on this and share an updated patch for the same soon.

Regards,
Vignesh

#31Amul Sul
sulamul@gmail.com
In reply to: vignesh C (#29)
Re: Logical Replication of sequences

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com>

wrote:

[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm
missing something.

We'll require the lsn because the sequence LSN informs the user that
it has been synchronized up to the LSN in pg_subscription_seq. Since
we are not supporting incremental sync, the user will be able to
identify if he should run refresh sequences or not by checking the lsn
of the pg_subscription_seq and the lsn of the sequence(using
pg_sequence_state added) in the publisher. Also, this parallels our
implementation for pg_subscription_seq and will aid in expanding for
a) incremental synchronization and b) utilizing workers for
synchronization using sequence states if necessary.

How do you track sequence mapping with the publication?

In the publisher we use pg_publication_rel and
pg_publication_namespace for mapping the sequences with the
publication.

Thanks for the explanation. I'm wondering what the complexity would be,
if we wanted to do something similar on the subscriber side, i.e.,
tracking via pg_subscription_rel.

Regards,
Amul

#32vignesh C
vignesh21@gmail.com
In reply to: Amul Sul (#31)
Re: Logical Replication of sequences

On Tue, 11 Jun 2024 at 09:41, Amul Sul <sulamul@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the lsn because the sequence LSN informs the user that
it has been synchronized up to the LSN in pg_subscription_seq. Since
we are not supporting incremental sync, the user will be able to
identify if he should run refresh sequences or not by checking the lsn
of the pg_subscription_seq and the lsn of the sequence(using
pg_sequence_state added) in the publisher. Also, this parallels our
implementation for pg_subscription_seq and will aid in expanding for
a) incremental synchronization and b) utilizing workers for
synchronization using sequence states if necessary.

How do you track sequence mapping with the publication?

In the publisher we use pg_publication_rel and
pg_publication_namespace for mapping the sequences with the
publication.

Thanks for the explanation. I'm wondering what the complexity would be, if we
wanted to do something similar on the subscriber side, i.e., tracking via
pg_subscription_rel.

Initially, I considered keeping the sequences separate because we won't
use sync workers to synchronize them and they won't need sync states
like init, sync, finishedcopy, syncdone, ready, etc. However, I'm OK
with using pg_subscription_rel, as it could potentially help later on
with incremental synchronization and with parallelizing the sync.

Regards,
Vignesh

#33Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#30)
Re: Logical Replication of sequences

On Tue, Jun 11, 2024 at 12:25 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 14:48, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Are you imagining the behavior for sequences associated with tables
differently than the ones defined by the CREATE SEQUENCE .. command? I
was thinking that users would associate sequences with publications
similar to what we do for tables for both cases. For example, they
need to explicitly mention the sequences they want to replicate by
commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE
PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR
SEQUENCES IN SCHEMA sch1;

In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1
should copy both the explicitly defined sequences and sequences
defined with the tables. Do you think a different variant for just
copying sequences implicitly associated with tables (say for identity
columns)?

Oh, I was thinking that your proposal was to copy literally all
sequences by REPLICA/REFRESH SEQUENCE command.

I am trying to keep the behavior as close to tables as possible.

But it seems to make
sense to explicitly specify the sequences they want to replicate. It
also means that they can create a publication that has only sequences.
In this case, even if they create a subscription for that publication,
we don't launch any apply workers for that subscription. Right?

Right, good point. I had not thought about this.

Also, given that the main use case (at least as the first step) is
version upgrade, do we really need to support SEQUENCES IN SCHEMA and
even FOR SEQUENCE?

At the very least, we can split the patch to move these variants to a
separate patch. Once the main patch is finalized, we can try to
evaluate the remaining separately.

I engaged in an offline discussion with Amit about strategizing the
division of patches to facilitate the review process. We agreed on the
following split: The first patch will encompass the setting and
getting of sequence values (core sequence changes). The second patch
will cover all changes on the publisher side related to "FOR ALL
SEQUENCES." The third patch will address subscriber side changes aimed
at synchronizing "FOR ALL SEQUENCES" publications. The fourth patch
will focus on supporting "FOR SEQUENCE" publication. Lastly, the fifth
patch will introduce support for "FOR ALL SEQUENCES IN SCHEMA"
publication.

I will work on this and share an updated patch for the same soon.

+1. Sounds like a good plan.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#34vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#33)
Re: Logical Replication of sequences

On Tue, 11 Jun 2024 at 12:38, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Jun 11, 2024 at 12:25 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 14:48, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Are you imagining the behavior for sequences associated with tables
differently than the ones defined by the CREATE SEQUENCE .. command? I
was thinking that users would associate sequences with publications
similar to what we do for tables for both cases. For example, they
need to explicitly mention the sequences they want to replicate by
commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE
PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR
SEQUENCES IN SCHEMA sch1;

In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1
should copy both the explicitly defined sequences and sequences
defined with the tables. Do you think a different variant for just
copying sequences implicitly associated with tables (say for identity
columns)?

Oh, I was thinking that your proposal was to copy literally all
sequences by REPLICA/REFRESH SEQUENCE command.

I am trying to keep the behavior as close to tables as possible.

But it seems to make
sense to explicitly specify the sequences they want to replicate. It
also means that they can create a publication that has only sequences.
In this case, even if they create a subscription for that publication,
we don't launch any apply workers for that subscription. Right?

Right, good point. I had not thought about this.

Also, given that the main use case (at least as the first step) is
version upgrade, do we really need to support SEQUENCES IN SCHEMA and
even FOR SEQUENCE?

At the very least, we can split the patch to move these variants to a
separate patch. Once the main patch is finalized, we can try to
evaluate the remaining separately.

I engaged in an offline discussion with Amit about strategizing the
division of patches to facilitate the review process. We agreed on the
following split: The first patch will encompass the setting and
getting of sequence values (core sequence changes). The second patch
will cover all changes on the publisher side related to "FOR ALL
SEQUENCES." The third patch will address subscriber side changes aimed
at synchronizing "FOR ALL SEQUENCES" publications. The fourth patch
will focus on supporting "FOR SEQUENCE" publication. Lastly, the fifth
patch will introduce support for "FOR ALL SEQUENCES IN SCHEMA"
publication.

I will work on this and share an updated patch for the same soon.

+1. Sounds like a good plan.

Amit and I engaged in an offline discussion regarding the design and
contemplated that it could be like below:
1) CREATE PUBLICATION syntax enhancement:
CREATE PUBLICATION ... FOR ALL SEQUENCES;
The addition of a new column titled "all sequences" in the
pg_publication system catalog will signify whether the publication is
designated as an all-sequences publication or not.
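
For example (the catalog column name below is only illustrative, not a
final name):

CREATE PUBLICATION all_seq_pub FOR ALL SEQUENCES;
-- the new flag would then be visible with something like:
SELECT pubname, puballsequences FROM pg_publication;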

2) CREATE SUBSCRIPTION -- no syntax change.
Upon creation of a subscription, the following additional steps will
be managed by the subscriber:
i) The subscriber will retrieve the list of sequences associated with
the subscription's publications.
ii) For each sequence: a) Retrieve the sequence value from the
publisher by invoking the pg_sequence_state function. b) Set the
sequence with the value obtained from the publisher.
iii) Once the subscription creation is completed, all sequence values
will become visible at the subscriber's end.

An alternative design approach could involve retrieving the sequence
list from the publisher during subscription creation and inserting the
sequences with an "init" state into the pg_subscription_rel system
table. These tasks could be executed by a single sequence sync worker,
which would:
i) Retrieve the list of sequences in the "init" state from the
pg_subscription_rel system table.
ii) Initiate a transaction.
iii) For each sequence: a) Obtain the sequence value from the
publisher by utilizing the pg_sequence_state function. b) Update the
sequence with the value obtained from the publisher.
iv) Commit the transaction.

The benefit of the second approach is that if there is a large number
of sequences, the sequence sync can later be enhanced to happen in
parallel. Also, if any locks are held on the sequences in the
publisher, the sequence sync worker can wait to acquire the lock
instead of blocking the whole CREATE SUBSCRIPTION command, which would
also delay the initial copy of the tables.
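
A rough sketch of what that single worker would run per cycle is below.
pg_sequence_state() is the function proposed here (its output columns
are assumed), 'public.s1' is a placeholder name, and setval() is shown
only as a stand-in for the internal value-setting step discussed in
point 5:

BEGIN;
-- for each sequence found in the "init" state in pg_subscription_rel:
--   fetch the current value over the publisher connection
SELECT last_value, page_lsn FROM pg_sequence_state('public.s1'::regclass);
--   apply it locally (100 stands for the fetched last_value) and
--   mark the pg_subscription_rel entry as synced
SELECT setval('public.s1', 100);
COMMIT;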

3) Refreshing the sequences can be achieved through the existing
command ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.
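
In other words, with a placeholder subscription name sub1:

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
-- syncs only the newly added sequences; already-synced ones are skipped
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
-- additionally re-synchronizes the values of all sequences in the list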

5) Introduce the pg_sequence_state function to fetch the sequence
value, along with the page LSN, from the publisher. Also introduce a
SetSequence function, which will acquire a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

Thoughts?

Regards,
Vignesh

#35Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#34)
Re: Logical Replication of sequences

On Tue, Jun 11, 2024 at 7:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 11 Jun 2024 at 12:38, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Jun 11, 2024 at 12:25 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 14:48, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Jun 10, 2024 at 12:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Jun 10, 2024 at 3:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Jun 7, 2024 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Are you imagining the behavior for sequences associated with tables
differently than the ones defined by the CREATE SEQUENCE .. command? I
was thinking that users would associate sequences with publications
similar to what we do for tables for both cases. For example, they
need to explicitly mention the sequences they want to replicate by
commands like CREATE PUBLICATION ... FOR SEQUENCE s1, s2, ...; CREATE
PUBLICATION ... FOR ALL SEQUENCES, or CREATE PUBLICATION ... FOR
SEQUENCES IN SCHEMA sch1;

In this, variants FOR ALL SEQUENCES and SEQUENCES IN SCHEMA sch1
should copy both the explicitly defined sequences and sequences
defined with the tables. Do you think a different variant for just
copying sequences implicitly associated with tables (say for identity
columns)?

Oh, I was thinking that your proposal was to copy literally all
sequences by REPLICA/REFRESH SEQUENCE command.

I am trying to keep the behavior as close to tables as possible.

But it seems to make
sense to explicitly specify the sequences they want to replicate. It
also means that they can create a publication that has only sequences.
In this case, even if they create a subscription for that publication,
we don't launch any apply workers for that subscription. Right?

Right, good point. I had not thought about this.

Also, given that the main use case (at least as the first step) is
version upgrade, do we really need to support SEQUENCES IN SCHEMA and
even FOR SEQUENCE?

At the very least, we can split the patch to move these variants to a
separate patch. Once the main patch is finalized, we can try to
evaluate the remaining separately.

I engaged in an offline discussion with Amit about strategizing the
division of patches to facilitate the review process. We agreed on the
following split: The first patch will encompass the setting and
getting of sequence values (core sequence changes). The second patch
will cover all changes on the publisher side related to "FOR ALL
SEQUENCES." The third patch will address subscriber side changes aimed
at synchronizing "FOR ALL SEQUENCES" publications. The fourth patch
will focus on supporting "FOR SEQUENCE" publication. Lastly, the fifth
patch will introduce support for "FOR ALL SEQUENCES IN SCHEMA"
publication.

I will work on this and share an updated patch for the same soon.

+1. Sounds like a good plan.

Amit and I engaged in an offline discussion regarding the design and
contemplated that it could be like below:
1) CREATE PUBLICATION syntax enhancement:
CREATE PUBLICATION ... FOR ALL SEQUENCES;
The addition of a new column titled "all sequences" in the
pg_publication system table will signify whether the publication is
designated as all sequences publication or not.

The first approach sounds like we don't create entries for sequences
in pg_subscription_rel. In this case, how do we know all sequences
that we need to refresh when executing the REFRESH PUBLICATION
SEQUENCES command you mentioned below?

2) CREATE SUBSCRIPTION -- no syntax change.
Upon creation of a subscription, the following additional steps will
be managed by the subscriber:
i) The subscriber will retrieve the list of sequences associated with
the subscription's publications.
ii) For each sequence: a) Retrieve the sequence value from the
publisher by invoking the pg_sequence_state function. b) Set the
sequence with the value obtained from the publisher. iv) Once the
subscription creation is completed, all sequence values will become
visible at the subscriber's end.

Are sequence values always copied from the publisher, or does that
happen only when copy_data = true?

An alternative design approach could involve retrieving the sequence
list from the publisher during subscription creation and inserting the
sequences with an "init" state into the pg_subscription_rel system
table. These tasks could be executed by a single sequence sync worker,
which would:
i) Retrieve the list of sequences in the "init" state from the
pg_subscription_rel system table.
ii) Initiate a transaction.
iii) For each sequence: a) Obtain the sequence value from the
publisher by utilizing the pg_sequence_state function. b) Update the
sequence with the value obtained from the publisher.
iv) Commit the transaction.

The benefit with the second approach is that if there are large number
of sequences, the sequence sync can be enhanced to happen in parallel
and also if there are any locks held on the sequences in the
publisher, the sequence worker can wait to acquire the lock instead of
blocking the whole create subscription command which will delay the
initial copy of the tables too.

I prefer to have separate workers to sync sequences. Probably we can
start with a single worker and extend it to have multiple workers. BTW,
will the sequence-sync worker be taken from the
max_sync_workers_per_subscription pool?

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and the sequences owned by the table. That
is, after the tablesync worker has caught up with the apply worker, it
synchronizes the sequences associated with the target table as well.
One benefit would be that at the time the initial table sync completes,
the table and its sequence data are consistent. As soon as new changes
come to the table it would become inconsistent again, so it might not
help much, though. Also, sequences that are not owned by any table will
still need to be synchronized by someone.

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

The difference between 3) and 4) is whether or not to re-synchronize
the previously synchronized sequences. Do we really want to introduce
a new command for 4)? I felt that we can invent an option, say
copy_all_sequence, for the REFRESH PUBLICATION command to cover the
4) case.

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

Does it mean that we create a new relfilenode for every update of the value?

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#36Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#34)
Re: Logical Replication of sequences

On Tue, Jun 11, 2024 at 4:06 PM vignesh C <vignesh21@gmail.com> wrote:

Amit and I engaged in an offline discussion regarding the design and
contemplated that it could be like below:

If I understand correctly, does this require the sequences to already
exist on the subscribing node before creating the subscription, or
will it also copy any non-existing sequences?

1) CREATE PUBLICATION syntax enhancement:
CREATE PUBLICATION ... FOR ALL SEQUENCES;
The addition of a new column titled "all sequences" in the
pg_publication system table will signify whether the publication is
designated as all sequences publication or not.

2) CREATE SUBSCRIPTION -- no syntax change.
Upon creation of a subscription, the following additional steps will
be managed by the subscriber:
i) The subscriber will retrieve the list of sequences associated with
the subscription's publications.
ii) For each sequence: a) Retrieve the sequence value from the
publisher by invoking the pg_sequence_state function. b) Set the
sequence with the value obtained from the publisher. iv) Once the
subscription creation is completed, all sequence values will become
visible at the subscriber's end.

An alternative design approach could involve retrieving the sequence
list from the publisher during subscription creation and inserting the
sequences with an "init" state into the pg_subscription_rel system
table. These tasks could be executed by a single sequence sync worker,
which would:
i) Retrieve the list of sequences in the "init" state from the
pg_subscription_rel system table.
ii) Initiate a transaction.
iii) For each sequence: a) Obtain the sequence value from the
publisher by utilizing the pg_sequence_state function. b) Update the
sequence with the value obtained from the publisher.
iv) Commit the transaction.

The benefit with the second approach is that if there are large number
of sequences, the sequence sync can be enhanced to happen in parallel
and also if there are any locks held on the sequences in the
publisher, the sequence worker can wait to acquire the lock instead of
blocking the whole create subscription command which will delay the
initial copy of the tables too.

Yeah, w.r.t. this point the second approach seems better.

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

Okay, this answers my first question: we will remove the sequences
that are removed from the publisher and add the new sequences. I don't
see any problem with this, but doesn't it seem like we are effectively
doing DDL replication only for sequences without having a
comprehensive plan for overall DDL replication?

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

I do not understand this point. You mean that whenever we fetch the
sequence value from the publisher, we need to create a new relfilenode
on the subscriber? Wouldn't just updating the catalog tuple be
sufficient? Or is this for handling the ALTER SEQUENCE case?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#37Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#35)
Re: Logical Replication of sequences

On Wed, Jun 12, 2024 at 10:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Jun 11, 2024 at 7:36 PM vignesh C <vignesh21@gmail.com> wrote:

1) CREATE PUBLICATION syntax enhancement:
CREATE PUBLICATION ... FOR ALL SEQUENCES;
The addition of a new column titled "all sequences" in the
pg_publication system table will signify whether the publication is
designated as all sequences publication or not.

The first approach sounds like we don't create entries for sequences
in pg_subscription_rel. In this case, how do we know all sequences
that we need to refresh when executing the REFRESH PUBLICATION
SEQUENCES command you mentioned below?

As per my understanding, we should be creating entries for sequences
in pg_subscription_rel similar to tables. The difference would be that
we won't need all the sync_states (i = initialize, d = data is being
copied, f = finished table copy, s = synchronized, r = ready) as we
don't need any synchronization with apply workers.
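
So, with this approach, sequences would simply show up in
pg_subscription_rel alongside tables and could be listed with something
like the below (just an illustration of the idea, not existing
behavior):

SELECT srrelid::regclass AS seq, srsubstate, srsublsn
FROM pg_subscription_rel
WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');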

2) CREATE SUBSCRIPTION -- no syntax change.
Upon creation of a subscription, the following additional steps will
be managed by the subscriber:
i) The subscriber will retrieve the list of sequences associated with
the subscription's publications.
ii) For each sequence: a) Retrieve the sequence value from the
publisher by invoking the pg_sequence_state function. b) Set the
sequence with the value obtained from the publisher. iv) Once the
subscription creation is completed, all sequence values will become
visible at the subscriber's end.

Sequence values are always copied from the publisher? or does it
happen only when copy_data = true?

It is better to do it when "copy_data = true" to keep it compatible
with the table's behavior.

An alternative design approach could involve retrieving the sequence
list from the publisher during subscription creation and inserting the
sequences with an "init" state into the pg_subscription_rel system
table. These tasks could be executed by a single sequence sync worker,
which would:
i) Retrieve the list of sequences in the "init" state from the
pg_subscription_rel system table.
ii) Initiate a transaction.
iii) For each sequence: a) Obtain the sequence value from the
publisher by utilizing the pg_sequence_state function. b) Update the
sequence with the value obtained from the publisher.
iv) Commit the transaction.

The benefit with the second approach is that if there are large number
of sequences, the sequence sync can be enhanced to happen in parallel
and also if there are any locks held on the sequences in the
publisher, the sequence worker can wait to acquire the lock instead of
blocking the whole create subscription command which will delay the
initial copy of the tables too.

I prefer to have separate workers to sync sequences.

+1.

Probably we can
start with a single worker and extend it to have multiple workers.

Yeah, starting with a single worker sounds good for now. Do you think
we should sync all the sequences in a single transaction or have some
threshold value above which a different transaction would be required
or maybe a different sequence sync worker altogether? Now, having
multiple sequence-sync workers requires some synchronization so that
only a single worker is allocated for one sequence.

The simplest thing is to use a single sequence sync worker that syncs
all sequences in one transaction but with a large number of sequences,
it could be inefficient. OTOH, I am not sure if it would be a problem
in reality.

BTW
the sequence-sync worker will be taken from
max_sync_workers_per_subscription pool?

I think so.

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and sequences owned by the table. That is,
after the tablesync worker caught up with the apply worker, the
tablesync worker synchronizes sequences associated with the target
table as well. One benefit would be that at the time of initial table
sync being completed, the table and its sequence data are consistent.
As soon as new changes come to the table, it would become inconsistent
so it might not be helpful much, though. Also, sequences that are not
owned by any table will still need to be synchronized by someone.

The other thing to consider in this idea is that we somehow need to
distinguish the sequences owned by the table.

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

The difference between 3) and 4) is whether or not to re-synchronize
the previously synchronized sequences. Do we really want to introduce
a new command for 4)? I felt that we can invent an option say
copy_all_sequence for the REFRESH PUBLICATION command to cover the 4)
case.

Yeah, that is also an option, but it could cause confusion alongside
the copy_data option. Say the user has selected copy_data = false but
copy_all_sequences = true; then the first option indicates to *not*
copy the data of tables and sequences, and the second option indicates
to copy the sequence data, which sounds contradictory. The other idea
is to have an option copy_existing_sequences (which indicates to copy
existing sequence values), but that has somewhat the same drawback as
copy_all_sequences, though to a lesser degree.
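
For instance, the confusing combination would look something like this
(copy_all_sequences is only a name floated in this discussion, not an
existing option):

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION
    WITH (copy_data = false, copy_all_sequences = true);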

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

Does it mean that we create a new relfilenode for every update of the value?

We need it for the initial sync so that if there is an error, both the
sequence state in pg_subscription_rel and the sequence values can be
rolled back together. However, it is unclear whether we need to create
a new relfilenode while copying existing sequences (say during ALTER
SUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we
decide on). Probably the answer lies in how we want to implement this
command. If we want to copy all sequence values during the command
itself then it is probably okay, but if we want to hand over this task
to the sequence-sync worker then we need some state management and a
new relfilenode so that on error both the state and the sequence values
are rolled back.

--
With Regards,
Amit Kapila.

#38vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#36)
Re: Logical Replication of sequences

On Wed, 12 Jun 2024 at 10:51, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Jun 11, 2024 at 4:06 PM vignesh C <vignesh21@gmail.com> wrote:

Amit and I engaged in an offline discussion regarding the design and
contemplated that it could be like below:

If I understand correctly, does this require the sequences to already
exist on the subscribing node before creating the subscription, or
will it also copy any non-existing sequences?

Sequences must exist in the subscriber; we'll synchronize only their
values. Any sequences that are not present in the subscriber will
trigger an error.
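
In other words, the sequence definitions have to be put in place on the
subscriber beforehand (for example, restored with pg_dump
--schema-only); only their values are pulled over:

-- On the subscriber, before CREATE SUBSCRIPTION (names are placeholders):
CREATE SEQUENCE s1;
CREATE SEQUENCE s2;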

1) CREATE PUBLICATION syntax enhancement:
CREATE PUBLICATION ... FOR ALL SEQUENCES;
The addition of a new column titled "all sequences" in the
pg_publication system table will signify whether the publication is
designated as all sequences publication or not.

2) CREATE SUBSCRIPTION -- no syntax change.
Upon creation of a subscription, the following additional steps will
be managed by the subscriber:
i) The subscriber will retrieve the list of sequences associated with
the subscription's publications.
ii) For each sequence: a) Retrieve the sequence value from the
publisher by invoking the pg_sequence_state function. b) Set the
sequence with the value obtained from the publisher. iv) Once the
subscription creation is completed, all sequence values will become
visible at the subscriber's end.

An alternative design approach could involve retrieving the sequence
list from the publisher during subscription creation and inserting the
sequences with an "init" state into the pg_subscription_rel system
table. These tasks could be executed by a single sequence sync worker,
which would:
i) Retrieve the list of sequences in the "init" state from the
pg_subscription_rel system table.
ii) Initiate a transaction.
iii) For each sequence: a) Obtain the sequence value from the
publisher by utilizing the pg_sequence_state function. b) Update the
sequence with the value obtained from the publisher.
iv) Commit the transaction.

The benefit with the second approach is that if there are large number
of sequences, the sequence sync can be enhanced to happen in parallel
and also if there are any locks held on the sequences in the
publisher, the sequence worker can wait to acquire the lock instead of
blocking the whole create subscription command which will delay the
initial copy of the tables too.

Yeah w.r.t. this point second approach seems better.

ok

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

Okay, this answers my first question: we will remove the sequences
that are removed from the publisher and add the new sequences. I don't
see any problem with this, but doesn't it seem like we are effectively
doing DDL replication only for sequences without having a
comprehensive plan for overall DDL replication?

What I intended to convey is that we'll only remove the sequence
entries from pg_subscription_rel. We won't facilitate DDL replication
of sequences; instead, we expect users to create the sequences
themselves.

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

I do not understand this point, you mean whenever we are fetching the
sequence value from the publisher we need to create a new relfilenode
on the subscriber? Why not just update the catalog tuple is
sufficient? Or this is for handling the ALTER SEQUENCE case?

Sequences operate differently from tables. Alterations to sequences
are reflected instantly in other sessions, even before the transaction
commits. To keep the sequence value and the state update in
pg_subscription_rel in sync, we assign the sequence a new relfilenode.
This ensures that, on any error, both the sequence state in
pg_subscription_rel and the sequence values can be rolled back
together.
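
To illustrate the non-transactional behavior being worked around here
(this is ordinary PostgreSQL behavior, nothing specific to this patch;
s1 is a placeholder name):

-- Session 1:
BEGIN;
SELECT nextval('s1');          -- advances the sequence
-- Session 2, before session 1 commits:
SELECT last_value FROM s1;     -- already sees the advanced value
-- Session 1:
ROLLBACK;                      -- the sequence value does NOT go back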

Regards,
Vignesh

#39Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#38)
Re: Logical Replication of sequences

On Wed, Jun 12, 2024 at 4:08 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 12 Jun 2024 at 10:51, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Jun 11, 2024 at 4:06 PM vignesh C <vignesh21@gmail.com> wrote:

Amit and I engaged in an offline discussion regarding the design and
contemplated that it could be like below:

If I understand correctly, does this require the sequences to already
exist on the subscribing node before creating the subscription, or
will it also copy any non-existing sequences?

Sequences must exist in the subscriber; we'll synchronize only their
values. Any sequences that are not present in the subscriber will
trigger an error.

Okay, that makes sense.

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

Okay, this answers my first question: we will remove the sequences
that are removed from the publisher and add the new sequences. I don't
see any problem with this, but doesn't it seem like we are effectively
doing DDL replication only for sequences without having a
comprehensive plan for overall DDL replication?

What I intended to convey is that we'll eliminate the sequences from
pg_subscription_rel. We won't facilitate the DDL replication of
sequences; instead, we anticipate users to create the sequences
themselves.

hmm okay.

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

I do not understand this point, you mean whenever we are fetching the
sequence value from the publisher we need to create a new relfilenode
on the subscriber? Why not just update the catalog tuple is
sufficient? Or this is for handling the ALTER SEQUENCE case?

Sequences operate distinctively from tables. Alterations to sequences
reflect instantly in another session, even before committing the
transaction. To ensure the synchronization of sequence value and state
updates in pg_subscription_rel, we assign it a new relfilenode. This
strategy ensures that any potential errors allow for the rollback of
both the sequence state in pg_subscription_rel and the sequence values
simultaneously.

So, you're saying that when we synchronize the sequence values on the
subscriber side, we will create a new relfilenode to allow reverting
to the old state of the sequence in case of an error or transaction
rollback? But why would we want to do that? Generally, even if you
call nextval() on a sequence and then roll back the transaction, the
sequence value doesn't revert to the old value. So, what specific
problem on the subscriber side are we trying to avoid by operating on
a new relfilenode?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#40vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#39)
Re: Logical Replication of sequences

On Wed, 12 Jun 2024 at 17:09, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Wed, Jun 12, 2024 at 4:08 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 12 Jun 2024 at 10:51, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Jun 11, 2024 at 4:06 PM vignesh C <vignesh21@gmail.com> wrote:

Amit and I engaged in an offline discussion regarding the design and
contemplated that it could be like below:

If I understand correctly, does this require the sequences to already
exist on the subscribing node before creating the subscription, or
will it also copy any non-existing sequences?

Sequences must exist in the subscriber; we'll synchronize only their
values. Any sequences that are not present in the subscriber will
trigger an error.

Okay, that makes sense.

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

Okay, this answers my first question: we will remove the sequences
that are removed from the publisher and add the new sequences. I don't
see any problem with this, but doesn't it seem like we are effectively
doing DDL replication only for sequences without having a
comprehensive plan for overall DDL replication?

What I intended to convey is that we'll eliminate the sequences from
pg_subscription_rel. We won't facilitate the DDL replication of
sequences; instead, we anticipate users to create the sequences
themselves.

hmm okay.

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

I do not understand this point, you mean whenever we are fetching the
sequence value from the publisher we need to create a new relfilenode
on the subscriber? Why not just update the catalog tuple is
sufficient? Or this is for handling the ALTER SEQUENCE case?

Sequences operate distinctively from tables. Alterations to sequences
reflect instantly in another session, even before committing the
transaction. To ensure the synchronization of sequence value and state
updates in pg_subscription_rel, we assign it a new relfilenode. This
strategy ensures that any potential errors allow for the rollback of
both the sequence state in pg_subscription_rel and the sequence values
simultaneously.

So, you're saying that when we synchronize the sequence values on the
subscriber side, we will create a new relfilenode to allow reverting
to the old state of the sequence in case of an error or transaction
rollback? But why would we want to do that? Generally, even if you
call nextval() on a sequence and then roll back the transaction, the
sequence value doesn't revert to the old value. So, what specific
problem on the subscriber side are we trying to avoid by operating on
a new relfilenode?

Let's consider a situation where we have two sequences: seq1 with a
value of 100 and seq2 with a value of 200. Now, let's say seq1 is
synced and updated to 100, and then, while attempting to synchronize
seq2, we hit a failure because the sequence does not exist or due to
some other issue. In this scenario, we don't want to end up in a state
where seq1's value has been synchronized but its state hasn't been
changed to "ready" in pg_subscription_rel.
Updating the sequence data directly reflects the sequence change
immediately. However, if we assign a new relfilenode to the sequence
and update the sequence value in that new relfilenode, then until the
transaction is committed other concurrent users will still be using the
old relfilenode for the sequence, and only the old data will be
visible. Once all sequences are synchronized and the sequence state is
updated in pg_subscription_rel, the transaction will either be
committed or aborted. If committed, users will be able to observe the
new sequence values because the sequences will have been switched to
the new relfilenode containing the updated value.

Regards,
Vignesh

#41Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#40)
Re: Logical Replication of sequences

On Thu, Jun 13, 2024 at 10:10 AM vignesh C <vignesh21@gmail.com> wrote:

So, you're saying that when we synchronize the sequence values on the
subscriber side, we will create a new relfilenode to allow reverting
to the old state of the sequence in case of an error or transaction
rollback? But why would we want to do that? Generally, even if you
call nextval() on a sequence and then roll back the transaction, the
sequence value doesn't revert to the old value. So, what specific
problem on the subscriber side are we trying to avoid by operating on
a new relfilenode?

Let's consider a situation where we have two sequences: seq1 with a
value of 100 and seq2 with a value of 200. Now, let's say seq1 is
synced and updated to 100, then we attempt to synchronize seq2,
there's a failure due to the sequence not existing or encountering
some other issue. In this scenario, we don't want to halt operations
where seq1 is synchronized, but the sequence state for sequence isn't
changed to "ready" in pg_subscription_rel.

Thanks for the explanation, but I am still not getting it completely.
Do you mean that unless all the sequences are synced, none of them
would be marked "ready" in pg_subscription_rel? Is that necessary? I
mean, why can we not sync the sequences one by one and mark them ready?
Why is it necessary to have either all the sequences synced or none of
them?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#42vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#41)
Re: Logical Replication of sequences

On Thu, 13 Jun 2024 at 10:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Jun 13, 2024 at 10:10 AM vignesh C <vignesh21@gmail.com> wrote:

So, you're saying that when we synchronize the sequence values on the
subscriber side, we will create a new relfilenode to allow reverting
to the old state of the sequence in case of an error or transaction
rollback? But why would we want to do that? Generally, even if you
call nextval() on a sequence and then roll back the transaction, the
sequence value doesn't revert to the old value. So, what specific
problem on the subscriber side are we trying to avoid by operating on
a new relfilenode?

Let's consider a situation where we have two sequences: seq1 with a
value of 100 and seq2 with a value of 200. Now, let's say seq1 is
synced and updated to 100, then we attempt to synchronize seq2,
there's a failure due to the sequence not existing or encountering
some other issue. In this scenario, we don't want to halt operations
where seq1 is synchronized, but the sequence state for sequence isn't
changed to "ready" in pg_subscription_rel.

Thanks for the explanation, but I am still not getting it completely,
do you mean to say unless all the sequences are not synced any of the
sequences would not be marked "ready" in pg_subscription_rel? Is that
necessary? I mean why we can not sync the sequences one by one and
mark them ready? Why it is necessary to either have all the sequences
synced or none of them?

Since updating the sequence is one operation and setting
pg_subscription_rel is another, I was trying to avoid a situation
where the sequence is updated but its state is not reflected in
pg_subscription_rel. It seems you are suggesting that it's acceptable
for the sequence to be updated even if its state isn't updated in
pg_subscription_rel, and in such cases, the sequence value does not
need to be reverted.

Regards,
Vignesh

#43Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#42)
Re: Logical Replication of sequences

On Thu, Jun 13, 2024 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 13 Jun 2024 at 10:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:

Thanks for the explanation, but I am still not getting it completely,
do you mean to say unless all the sequences are not synced any of the
sequences would not be marked "ready" in pg_subscription_rel? Is that
necessary? I mean why we can not sync the sequences one by one and
mark them ready? Why it is necessary to either have all the sequences
synced or none of them?

Since updating the sequence is one operation and setting
pg_subscription_rel is another, I was trying to avoid a situation
where the sequence is updated but its state is not reflected in
pg_subscription_rel. It seems you are suggesting that it's acceptable
for the sequence to be updated even if its state isn't updated in
pg_subscription_rel, and in such cases, the sequence value does not
need to be reverted.

Right, the complexity we're adding to achieve a behavior that may not
be truly desirable is a concern. For instance, if we mark the status as
ready but do not sync the sequences, it could lead to issues. However,
if we have synced some sequences but encounter a failure without
marking the status as ready, I don't consider that inconsistent in any
way. But anyway, now that I understand your thinking behind it, it
seems like a good idea to leave this design decision for later.
Gathering more opinions and insights during later stages will provide a
clearer perspective on how to proceed with this aspect. Thanks.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#44Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#37)
Re: Logical Replication of sequences

On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 12, 2024 at 10:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Jun 11, 2024 at 7:36 PM vignesh C <vignesh21@gmail.com> wrote:

1) CREATE PUBLICATION syntax enhancement:
CREATE PUBLICATION ... FOR ALL SEQUENCES;
The addition of a new column titled "all sequences" in the
pg_publication system table will signify whether the publication is
designated as all sequences publication or not.

The first approach sounds like we don't create entries for sequences
in pg_subscription_rel. In this case, how do we know all sequences
that we need to refresh when executing the REFRESH PUBLICATION
SEQUENCES command you mentioned below?

As per my understanding, we should be creating entries for sequences
in pg_subscription_rel similar to tables. The difference would be that
we won't need all the sync_states (i = initialize, d = data is being
copied, f = finished table copy, s = synchronized, r = ready) as we
don't need any synchronization with apply workers.

Agreed.

2) CREATE SUBSCRIPTION -- no syntax change.
Upon creation of a subscription, the following additional steps will
be managed by the subscriber:
i) The subscriber will retrieve the list of sequences associated with
the subscription's publications.
ii) For each sequence: a) Retrieve the sequence value from the
publisher by invoking the pg_sequence_state function. b) Set the
sequence with the value obtained from the publisher. iv) Once the
subscription creation is completed, all sequence values will become
visible at the subscriber's end.

Sequence values are always copied from the publisher? or does it
happen only when copy_data = true?

It is better to do it when "copy_data = true" to keep it compatible
with the table's behavior.

+1

Probably we can
start with a single worker and extend it to have multiple workers.

Yeah, starting with a single worker sounds good for now. Do you think
we should sync all the sequences in a single transaction or have some
threshold value above which a different transaction would be required
or maybe a different sequence sync worker altogether? Now, having
multiple sequence-sync workers requires some synchronization so that
only a single worker is allocated for one sequence.

The simplest thing is to use a single sequence sync worker that syncs
all sequences in one transaction but with a large number of sequences,
it could be inefficient. OTOH, I am not sure if it would be a problem
in reality.

I think that we can start with using a single worker and one
transaction, and measure the performance with a large number of
sequences.
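
For such a measurement, a large set of sequences can be generated
easily, e.g. with psql's \gexec (just one possible way to set up the
test):

SELECT format('CREATE SEQUENCE bench_seq_%s', i)
FROM generate_series(1, 10000) AS i \gexec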

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and sequences owned by the table. That is,
after the tablesync worker caught up with the apply worker, the
tablesync worker synchronizes sequences associated with the target
table as well. One benefit would be that at the time of initial table
sync being completed, the table and its sequence data are consistent.

Correction: it's not guaranteed that the sequence data and table data
are consistent even in this case, since the tablesync worker could get
on-disk sequence data that might have already been updated.

As soon as new changes come to the table, it would become inconsistent
so it might not be helpful much, though. Also, sequences that are not
owned by any table will still need to be synchronized by someone.

The other thing to consider in this idea is that we somehow need to
distinguish the sequences owned by the table.

I think we can check pg_depend. Owned sequences have dependency
entries referencing their table.
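Something along these lines should find them (deptype 'a' covers
serial/owned sequences and 'i' covers identity-column sequences):

SELECT d.objid::regclass    AS seq,
       d.refobjid::regclass AS owning_table
FROM pg_depend d
JOIN pg_class c ON c.oid = d.objid
WHERE c.relkind = 'S'
  AND d.classid = 'pg_class'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.deptype IN ('a', 'i');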

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION(no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

The difference between 3) and 4) is whether or not to re-synchronize
the previously synchronized sequences. Do we really want to introduce
a new command for 4)? I felt that we can invent an option say
copy_all_sequence for the REFRESH PUBLICATION command to cover the 4)
case.

Yeah, that is also an option, but it could be confusing alongside the
copy_data option. Say the user has selected copy_data = false but
copy_all_sequences = true: the first option indicates *not* to copy
the data of tables and sequences, while the second option indicates to
copy the sequence data, which sounds contradictory. The other idea is
to have an option copy_existing_sequences (which indicates to copy
existing sequence values), but that also has somewhat the same
drawback as copy_all_sequences, though to a lesser degree.

Good point. And I understood that the REFRESH PUBLICATION SEQUENCES
command would be helpful when users want to synchronize sequences
between two nodes before upgrading.

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

Does it mean that we create a new relfilenode for every update of the value?

We need it for initial sync so that if there is an error both the
sequence state in pg_subscription_rel and sequence values can be
rolled back together.

Agreed.

However, it is unclear whether we need to create
a new relfilenode while copying existing sequences (say during ALTER
SUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we
decide)? Probably the answer lies in how we want to implement this
command. If we want to copy all sequence values during the command
itself then it is probably okay but if we want to hand over this task
to the sequence-sync worker then we need some state management and a
new relfilenode so that on error both state and sequence values are
rolled back.

What state transition of pg_subscription_rel entries for sequences do
we need while copying sequences values? For example, we insert an
entry with 'init' state at CREATE SUBSCRIPTION and then the
sequence-sync worker updates to 'ready' and copies the sequence data.
And at REFRESH PUBLICATION SEQUENCES, we update the state back to
'init' again so that the sequence-sync worker can process it? Given
REFRESH PUBLICATION SEQUENCES won't be executed very frequently, it
might be acceptable to transactionally update sequence values.
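
As a rough sketch of how that could be observed on the subscriber
(reusing the existing pg_subscription_rel catalog and its 'i'/'r'
state letters; sequences appearing there is part of the proposal, not
existing behavior):

-- sync state of the sequences tracked for each subscription
SELECT s.subname, sr.srrelid::regclass AS seq, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relkind = 'S';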

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#45Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#44)
Re: Logical Replication of sequences

On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Yeah, starting with a single worker sounds good for now. Do you think
we should sync all the sequences in a single transaction or have some
threshold value above which a different transaction would be required
or maybe a different sequence sync worker altogether? Now, having
multiple sequence-sync workers requires some synchronization so that
only a single worker is allocated for one sequence.

The simplest thing is to use a single sequence sync worker that syncs
all sequences in one transaction but with a large number of sequences,
it could be inefficient. OTOH, I am not sure if it would be a problem
in reality.

I think that we can start with using a single worker and one
transaction, and measure the performance with a large number of
sequences.

Fair enough. However, this raises the question Dilip and Vignesh are
discussing: whether we need a new relfilenode for a sequence update even
during the initial sync. As per my understanding, the idea is that, similar
to tables, the CREATE SUBSCRIPTION command (with copy_data = true)
will create the new sequence entries in pg_subscription_rel with the
state as 'i'. Then the sequence-sync worker would start a transaction
and one-by-one copy the latest sequence values for each sequence (that
has state as 'i' in pg_subscription_rel) and mark its state as ready
'r' and commit the transaction. Now if there is an error during this
operation it will restart the entire operation. The idea of creating a
new relfilenode is to handle the error so that if there is a rollback,
the sequence state will be rolled back to 'i' and the sequence value
will also be rolled back. The other option could be that we update the
sequence value without a new relfilenode and if the transaction rolled
back then only the sequence's state will be rolled back to 'i'. This
would work with a minor inconsistency that sequence values will be
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
I am not sure if that matters because anyway, they can quickly be
out-of-sync with the publisher again.

Now, say we don't want to maintain the state of sequences for initial
sync at all then after the error how will we detect if there are any
pending sequences to be synced? One possibility is that we maintain a
subscription level flag 'subsequencesync' in 'pg_subscription' to
indicate whether sequences need sync. This flag would indicate whether
to sync all the sequences in pg_subscription_rel. This would mean that
if there is an error while syncing the sequences we will resync all
the sequences again. This could be acceptable considering the chances
of error during sequence sync are low. The benefit is that both the
REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same
idea and sync all sequences without needing a new relfilenode. Users
can always refer to the 'subsequencesync' flag in 'pg_subscription' to see if
all the sequences are synced after executing the command.
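
With that approach, checking the overall sync status would be as
simple as something like the following (note that 'subsequencesync' is
only a proposed column here; it does not exist in pg_subscription
today):

-- hypothetical check of the proposed subscription-level flag
SELECT subname, subsequencesync
FROM pg_subscription
WHERE subname = 'mysub';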

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and sequences owned by the table. That is,
after the tablesync worker caught up with the apply worker, the
tablesync worker synchronizes sequences associated with the target
table as well. One benefit would be that at the time of initial table
sync being completed, the table and its sequence data are consistent.

Correction; it's not guaranteed that the sequence data and table data
are consistent even in this case since the tablesync worker could get
on-disk sequence data that might have already been updated.

The benefit of this approach is not clear to me. Our aim is to sync
all sequences before the upgrade, so not sure if this helps because
anyway both table values and corresponding sequences can again be
out-of-sync very quickly.

3) Refreshing the sequence can be achieved through the existing
command: ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change
here).
The subscriber identifies stale sequences, meaning sequences present
in pg_subscription_rel but absent from the publication, and removes
them from the pg_subscription_rel system table. The subscriber also
checks for newly added sequences in the publisher and synchronizes
their values from the publisher using the steps outlined in the
subscription creation process. It's worth noting that previously
synchronized sequences won't be synchronized again; the sequence sync
will occur solely for the newly added sequences.

4) Introducing a new command for refreshing all sequences: ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
The subscriber will remove stale sequences and add newly added
sequences from the publisher. Following this, it will re-synchronize
the sequence values for all sequences in the updated list from the
publisher, following the steps outlined in the subscription creation
process.

The difference between 3) and 4) is whether or not to re-synchronize
the previously synchronized sequences. Do we really want to introduce
a new command for 4)? I felt that we can invent an option say
copy_all_sequence for the REFRESH PUBLICATION command to cover the 4)
case.

Yeah, that is also an option, but it could be confusing alongside the
copy_data option. Say the user has selected copy_data = false but
copy_all_sequences = true: the first option indicates *not* to copy
the data of tables and sequences, while the second option indicates to
copy the sequence data, which sounds contradictory. The other idea is
to have an option copy_existing_sequences (which indicates to copy
existing sequence values), but that also has somewhat the same
drawback as copy_all_sequences, though to a lesser degree.

Good point. And I understood that the REFRESH PUBLICATION SEQUENCES
command would be helpful when users want to synchronize sequences
between two nodes before upgrading.

Right.

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

Does it mean that we create a new relfilenode for every update of the value?

We need it for initial sync so that if there is an error both the
sequence state in pg_subscription_rel and sequence values can be
rolled back together.

Agreed.

However, it is unclear whether we need to create
a new relfilenode while copying existing sequences (say during ALTER
SUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we
decide)? Probably the answer lies in how we want to implement this
command. If we want to copy all sequence values during the command
itself then it is probably okay but if we want to hand over this task
to the sequence-sync worker then we need some state management and a
new relfilenode so that on error both state and sequence values are
rolled back.

What state transition of pg_subscription_rel entries for sequences do
we need while copying sequences values? For example, we insert an
entry with 'init' state at CREATE SUBSCRIPTION and then the
sequence-sync worker updates to 'ready' and copies the sequence data.
And at REFRESH PUBLICATION SEQUENCES, we update the state back to
'init' again so that the sequence-sync worker can process it? Given
REFRESH PUBLICATION SEQUENCES won't be executed very frequently, it
might be acceptable to transactionally update sequence values.

Do you mean to sync the sequences during the REFRESH PUBLICATION
SEQUENCES command itself? If so, there is an argument that we can do
the same during CREATE SUBSCRIPTION. It would be beneficial to keep
the method of syncing the sequences the same for both the CREATE and
REFRESH commands. I have speculated on one idea above and would be
happy to see your thoughts.

--
With Regards,
Amit Kapila.

#46Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#45)
Re: Logical Replication of sequences

On Thu, Jun 13, 2024 at 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Yeah, starting with a single worker sounds good for now. Do you think
we should sync all the sequences in a single transaction or have some
threshold value above which a different transaction would be required
or maybe a different sequence sync worker altogether? Now, having
multiple sequence-sync workers requires some synchronization so that
only a single worker is allocated for one sequence.

The simplest thing is to use a single sequence sync worker that syncs
all sequences in one transaction but with a large number of sequences,
it could be inefficient. OTOH, I am not sure if it would be a problem
in reality.

I think that we can start with using a single worker and one
transaction, and measure the performance with a large number of
sequences.

Fair enough. However, this raises the question Dilip and Vignesh are
discussing: whether we need a new relfilenode for a sequence update even
during the initial sync. As per my understanding, the idea is that, similar
to tables, the CREATE SUBSCRIPTION command (with copy_data = true)
will create the new sequence entries in pg_subscription_rel with the
state as 'i'. Then the sequence-sync worker would start a transaction
and one-by-one copy the latest sequence values for each sequence (that
has state as 'i' in pg_subscription_rel) and mark its state as ready
'r' and commit the transaction. Now if there is an error during this
operation it will restart the entire operation. The idea of creating a
new relfilenode is to handle the error so that if there is a rollback,
the sequence state will be rolled back to 'i' and the sequence value
will also be rolled back. The other option could be that we update the
sequence value without a new relfilenode and if the transaction rolled
back then only the sequence's state will be rolled back to 'i'. This
would work with a minor inconsistency that sequence values will be
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
I am not sure if that matters because anyway, they can quickly be
out-of-sync with the publisher again.

I think it would be fine in many cases if the sequence value is
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
But the case we would like to avoid is where, say, the sequence-sync
worker does both synchronizing sequence values and updating the
sequence states for all sequences in one transaction, and if there is
an error we end up retrying the synchronization for all sequences.

Now, say we don't want to maintain the state of sequences for initial
sync at all then after the error how will we detect if there are any
pending sequences to be synced? One possibility is that we maintain a
subscription level flag 'subsequencesync' in 'pg_subscription' to
indicate whether sequences need sync. This flag would indicate whether
to sync all the sequences in pg_subscription_rel. This would mean that
if there is an error while syncing the sequences we will resync all
the sequences again. This could be acceptable considering the chances
of error during sequence sync are low. The benefit is that both the
REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same
idea and sync all sequences without needing a new relfilenode. Users
can always refer to the 'subsequencesync' flag in 'pg_subscription' to see if
all the sequences are synced after executing the command.

I think that REFRESH PUBLICATION {SEQUENCES} can be executed even
while the sequence-sync worker is synchronizing sequences. In this
case, the worker might not see new sequences added by the concurrent
REFRESH PUBLICATION {SEQUENCES} command since it's already running.
The worker could end up marking the subsequencesync as completed while
not synchronizing these new sequences.

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and sequences owned by the table. That is,
after the tablesync worker caught up with the apply worker, the
tablesync worker synchronizes sequences associated with the target
table as well. One benefit would be that at the time of initial table
sync being completed, the table and its sequence data are consistent.

Correction; it's not guaranteed that the sequence data and table data
are consistent even in this case since the tablesync worker could get
on-disk sequence data that might have already been updated.

The benefit of this approach is not clear to me. Our aim is to sync
all sequences before the upgrade, so not sure if this helps because
anyway both table values and corresponding sequences can again be
out-of-sync very quickly.

Right.

Given that our aim is to sync all sequences before the upgrade, do we
need to synchronize sequences even at CREATE SUBSCRIPTION time? In
cases where there are a large number of sequences, synchronizing
sequences in addition to tables could add overhead and make less
sense, because sequences can again be out-of-sync quickly and
typically CREATE SUBSCRIPTION is not executed just before the upgrade.

5) Incorporate the pg_sequence_state function to fetch the sequence
value from the publisher, along with the page LSN. Incorporate
SetSequence function, which will procure a new relfilenode for the
sequence and set the new relfilenode with the specified value. This
will facilitate rollback in case of any failures.

Does it mean that we create a new relfilenode for every update of the value?

We need it for initial sync so that if there is an error both the
sequence state in pg_subscription_rel and sequence values can be
rolled back together.

Agreed.

However, it is unclear whether we need to create
a new relfilenode while copying existing sequences (say during ALTER
SUBSCRIPTION .. REFRESH PUBLICATION SEQUENCES, or whatever command we
decide)? Probably the answer lies in how we want to implement this
command. If we want to copy all sequence values during the command
itself then it is probably okay but if we want to hand over this task
to the sequence-sync worker then we need some state management and a
new relfilenode so that on error both state and sequence values are
rolled back.

What state transition of pg_subscription_rel entries for sequences do
we need while copying sequences values? For example, we insert an
entry with 'init' state at CREATE SUBSCRIPTION and then the
sequence-sync worker updates to 'ready' and copies the sequence data.
And at REFRESH PUBLICATION SEQUENCES, we update the state back to
'init' again so that the sequence-sync worker can process it? Given
REFRESH PUBLICATION SEQUENCES won't be executed very frequently, it
might be acceptable to transactionally update sequence values.

Do you mean to sync the sequences during the REFRESH PUBLICATION
SEQUENCES command itself? If so, there is an argument that we can do
the same during CREATE SUBSCRIPTION. It would be beneficial to keep
the method of syncing the sequences the same for both the CREATE and
REFRESH commands. I have speculated on one idea above and would be
happy to see your thoughts.

I meant that the REFRESH PUBLICATION SEQUENCES command updates all
sequence states in pg_subscription_rel to 'init' state, and the
sequence-sync worker can do the synchronization work. We use the same
method for both the CREATE SUBSCRIPTION and REFRESH PUBLICATION
{SEQUENCES} commands.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#47Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#45)
Re: Logical Replication of sequences

On Thu, Jun 13, 2024 at 03:36:05PM +0530, Amit Kapila wrote:

Fair enough. However, this raises the question Dilip and Vignesh are
discussing: whether we need a new relfilenode for a sequence update even
during the initial sync. As per my understanding, the idea is that, similar
to tables, the CREATE SUBSCRIPTION command (with copy_data = true)
will create the new sequence entries in pg_subscription_rel with the
state as 'i'. Then the sequence-sync worker would start a transaction
and one-by-one copy the latest sequence values for each sequence (that
has state as 'i' in pg_subscription_rel) and mark its state as ready
'r' and commit the transaction. Now if there is an error during this
operation it will restart the entire operation.

Hmm. You mean to use only one transaction for all the sequences?
I've heard about deployments with a lot of them. Could it be a
problem to process them in batches, as well? If you maintain a state
for each one of them in pg_subscription_rel, it does not strike me as
an issue, while being more flexible than an all-or-nothing.

The idea of creating a
new relfilenode is to handle the error so that if there is a rollback,
the sequence state will be rolled back to 'i' and the sequence value
will also be rolled back. The other option could be that we update the
sequence value without a new relfilenode and if the transaction rolled
back then only the sequence's state will be rolled back to 'i'. This
would work with a minor inconsistency that sequence values will be
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
I am not sure if that matters because anyway, they can quickly be
out-of-sync with the publisher again.

Seeing a mention to relfilenodes specifically for sequences freaks me
out a bit, because there's some work I have been doing in this area
and sequences may not have a need for a physical relfilenode at all.
But I guess that you refer to the fact that like tables, relfilenodes
would only be created as required because anything you'd do in the
apply worker path would just call some of the routines of sequence.h,
right?

Now, say we don't want to maintain the state of sequences for initial
sync at all then after the error how will we detect if there are any
pending sequences to be synced? One possibility is that we maintain a
subscription level flag 'subsequencesync' in 'pg_subscription' to
indicate whether sequences need sync. This flag would indicate whether
to sync all the sequences in pg_subscription_rel. This would mean that
if there is an error while syncing the sequences we will resync all
the sequences again. This could be acceptable considering the chances
of error during sequence sync are low.

There could be multiple subscriptions to a single database that point
to the same set of sequences. Is there any conflict issue to worry
about here?

The benefit is that both the
REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same
idea and sync all sequences without needing a new relfilenode. Users
can always refer to the 'subsequencesync' flag in 'pg_subscription' to see if
all the sequences are synced after executing the command.

That would be cheaper, indeed. Isn't a boolean too limiting?
Isn't that something you'd want to track with a LSN as "the point in
WAL where all the sequences have been synced"?

The approach of doing all the sync work from the subscriber, while
having a command that can be kicked from the subscriber side is a good
user experience.
--
Michael

#48Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#46)
Re: Logical Replication of sequences

On Thu, Jun 13, 2024 at 6:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Thu, Jun 13, 2024 at 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Yeah, starting with a single worker sounds good for now. Do you think
we should sync all the sequences in a single transaction or have some
threshold value above which a different transaction would be required
or maybe a different sequence sync worker altogether? Now, having
multiple sequence-sync workers requires some synchronization so that
only a single worker is allocated for one sequence.

The simplest thing is to use a single sequence sync worker that syncs
all sequences in one transaction but with a large number of sequences,
it could be inefficient. OTOH, I am not sure if it would be a problem
in reality.

I think that we can start with using a single worker and one
transaction, and measure the performance with a large number of
sequences.

Fair enough. However, this raises the question Dilip and Vignesh are
discussing: whether we need a new relfilenode for a sequence update even
during the initial sync. As per my understanding, the idea is that, similar
to tables, the CREATE SUBSCRIPTION command (with copy_data = true)
will create the new sequence entries in pg_subscription_rel with the
state as 'i'. Then the sequence-sync worker would start a transaction
and one-by-one copy the latest sequence values for each sequence (that
has state as 'i' in pg_subscription_rel) and mark its state as ready
'r' and commit the transaction. Now if there is an error during this
operation it will restart the entire operation. The idea of creating a
new relfilenode is to handle the error so that if there is a rollback,
the sequence state will be rolled back to 'i' and the sequence value
will also be rolled back. The other option could be that we update the
sequence value without a new relfilenode and if the transaction rolled
back then only the sequence's state will be rolled back to 'i'. This
would work with a minor inconsistency that sequence values will be
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
I am not sure if that matters because anyway, they can quickly be
out-of-sync with the publisher again.

I think it would be fine in many cases if the sequence value is
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
But the case we would like to avoid is where, say, the sequence-sync
worker does both synchronizing sequence values and updating the
sequence states for all sequences in one transaction, and if there is
an error we end up retrying the synchronization for all sequences.

One idea to avoid this is to update sequences in chunks (say 100 or
some threshold number of sequences per transaction). Then we would
only redo the sync for the last and still-pending set of sequences.

Now, say we don't want to maintain the state of sequences for initial
sync at all then after the error how will we detect if there are any
pending sequences to be synced? One possibility is that we maintain a
subscription level flag 'subsequencesync' in 'pg_subscription' to
indicate whether sequences need sync. This flag would indicate whether
to sync all the sequences in pg_subscription_rel. This would mean that
if there is an error while syncing the sequences we will resync all
the sequences again. This could be acceptable considering the chances
of error during sequence sync are low. The benefit is that both the
REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same
idea and sync all sequences without needing a new relfilenode. Users
can always refer to the 'subsequencesync' flag in 'pg_subscription' to see if
all the sequences are synced after executing the command.

I think that REFRESH PUBLICATION {SEQUENCES} can be executed even
while the sequence-sync worker is synchronizing sequences. In this
case, the worker might not see new sequences added by the concurrent
REFRESH PUBLICATION {SEQUENCES} command since it's already running.
The worker could end up marking the subsequencesync as completed while
not synchronizing these new sequences.

This is possible, but we could avoid this problem with REFRESH
PUBLICATION {SEQUENCES} by not allowing the subsequencestate to be
changed while the sequence-sync worker is syncing the sequences. This
could be restrictive, but there don't seem to be cases where a user
would want to immediately refresh sequences after creating the
subscription.

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and sequences owned by the table. That is,
after the tablesync worker caught up with the apply worker, the
tablesync worker synchronizes sequences associated with the target
table as well. One benefit would be that at the time of initial table
sync being completed, the table and its sequence data are consistent.

Correction; it's not guaranteed that the sequence data and table data
are consistent even in this case since the tablesync worker could get
on-disk sequence data that might have already been updated.

The benefit of this approach is not clear to me. Our aim is to sync
all sequences before the upgrade, so not sure if this helps because
anyway both table values and corresponding sequences can again be
out-of-sync very quickly.

Right.

Given that our aim is to sync all sequences before the upgrade, do we
need to synchronize sequences even at CREATE SUBSCRIPTION time? In
cases where there are a large number of sequences, synchronizing
sequences in addition to tables could add overhead and make less
sense, because sequences can again be out-of-sync quickly and
typically CREATE SUBSCRIPTION is not executed just before the upgrade.

I think for the upgrade one should be creating a subscription just
before the upgrade. Isn't something similar done even in the
upgrade steps you shared once [1]? Typically users should get all the
data from the publisher before the upgrade of the publisher via
creating a subscription. Also, it would be better to keep the
implementation of sequences close to tables wherever possible. Having
said that, I understand your point as well and if you strongly feel
that we don't need to sync sequences at the time of CREATE
SUBSCRIPTION and others also don't see any problem with it then we can
consider that as well.

Do you mean to sync the sequences during the REFRESH PUBLICATION
SEQUENCES command itself? If so, there is an argument that we can do
the same during CREATE SUBSCRIPTION. It would be beneficial to keep
the method of syncing the sequences the same for both the CREATE and
REFRESH commands. I have speculated on one idea above and would be
happy to see your thoughts.

I meant that the REFRESH PUBLICATION SEQUENCES command updates all
sequence states in pg_subscription_rel to 'init' state, and the
sequence-sync worker can do the synchronization work. We use the same
method for both the CREATE SUBSCRIPTION and REFRESH PUBLICATION
{SEQUENCES} commands.

Marking the state as 'init' when we would have already synced the
sequences sounds a bit odd, but otherwise this could also work if we
accept that the state could remain 'init' even though the sequences
are synced (on rollbacks).

[1]: https://knock.app/blog/zero-downtime-postgres-upgrades

--
With Regards,
Amit Kapila.

#49Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#47)
Re: Logical Replication of sequences

On Fri, Jun 14, 2024 at 5:16 AM Michael Paquier <michael@paquier.xyz> wrote:

On Thu, Jun 13, 2024 at 03:36:05PM +0530, Amit Kapila wrote:

Fair enough. However, this raises the question Dilip and Vignesh are
discussing: whether we need a new relfilenode for a sequence update even
during the initial sync. As per my understanding, the idea is that, similar
to tables, the CREATE SUBSCRIPTION command (with copy_data = true)
will create the new sequence entries in pg_subscription_rel with the
state as 'i'. Then the sequence-sync worker would start a transaction
and one-by-one copy the latest sequence values for each sequence (that
has state as 'i' in pg_subscription_rel) and mark its state as ready
'r' and commit the transaction. Now if there is an error during this
operation it will restart the entire operation.

Hmm. You mean to use only one transaction for all the sequences?
I've heard about deployments with a lot of them. Could it be a
problem to process them in batches, as well?

I don't think so. We can even sync one sequence per transaction but
then it would be resource and time consuming without much gain. As
mentioned in a previous email, we might want to sync 100 or some other
threshold number of sequences per transaction. The other possibility
is to make a subscription-level option for this batch size but I don't
see much advantage in doing so as it won't be convenient for users to
set it. I feel we should pick some threshold number that is neither
too low nor too high and if we later see any problem with it, we can
make it a configurable knob.

The idea of creating a
new relfilenode is to handle the error so that if there is a rollback,
the sequence state will be rolled back to 'i' and the sequence value
will also be rolled back. The other option could be that we update the
sequence value without a new relfilenode and if the transaction rolled
back then only the sequence's state will be rolled back to 'i'. This
would work with a minor inconsistency that sequence values will be
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
I am not sure if that matters because anyway, they can quickly be
out-of-sync with the publisher again.

Seeing a mention to relfilenodes specifically for sequences freaks me
out a bit, because there's some work I have been doing in this area
and sequences may not have a need for a physical relfilenode at all.
But I guess that you refer to the fact that like tables, relfilenodes
would only be created as required because anything you'd do in the
apply worker path would just call some of the routines of sequence.h,
right?

Yes, I think so. The only thing the patch expects is a way to rollback
the sequence changes if the transaction rolls back during the initial
sync. But I am not sure if we need such a behavior. The discussion for
the same is in progress. Let's wait for the outcome.

Now, say we don't want to maintain the state of sequences for initial
sync at all then after the error how will we detect if there are any
pending sequences to be synced? One possibility is that we maintain a
subscription level flag 'subsequencesync' in 'pg_subscription' to
indicate whether sequences need sync. This flag would indicate whether
to sync all the sequences in pg_subscription_rel. This would mean that
if there is an error while syncing the sequences we will resync all
the sequences again. This could be acceptable considering the chances
of error during sequence sync are low.

There could be multiple subscriptions to a single database that point
to the same set of sequences. Is there any conflict issue to worry
about here?

I don't think so. In the worst case, the same value would be copied
twice. The same scenario in case of tables could lead to duplicate
data or unique key violation ERRORs which is much worse. So, I expect
users to be careful about the same.

The benefit is that both the
REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same
idea and sync all sequences without needing a new relfilenode. Users
can always refer to the 'subsequencesync' flag in 'pg_subscription' to see if
all the sequences are synced after executing the command.

That would be cheaper, indeed. Isn't a boolean too limiting?

In this idea, we only need a flag to say whether the sequence sync is
required or not.

Isn't that something you'd want to track with a LSN as "the point in
WAL where all the sequences have been synced"?

It won't be any better for the required purpose because after CREATE
SUBSCRIPTION, if REFRESH wants to toggle the flag to indicate that the
sequences need sync again, then using an LSN would mean we need to set
it to an invalid value.

The approach of doing all the sync work from the subscriber, while
having a command that can be kicked from the subscriber side is a good
user experience.

Thank you for endorsing the idea.

--
With Regards,
Amit Kapila.

#50Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#48)
Re: Logical Replication of sequences

On Fri, Jun 14, 2024 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 13, 2024 at 6:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Thu, Jun 13, 2024 at 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 13, 2024 at 1:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Jun 12, 2024 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Yeah, starting with a single worker sounds good for now. Do you think
we should sync all the sequences in a single transaction or have some
threshold value above which a different transaction would be required
or maybe a different sequence sync worker altogether? Now, having
multiple sequence-sync workers requires some synchronization so that
only a single worker is allocated for one sequence.

The simplest thing is to use a single sequence sync worker that syncs
all sequences in one transaction but with a large number of sequences,
it could be inefficient. OTOH, I am not sure if it would be a problem
in reality.

I think that we can start with using a single worker and one
transaction, and measure the performance with a large number of
sequences.

Fair enough. However, this raises the question Dilip and Vignesh are
discussing: whether we need a new relfilenode for a sequence update even
during the initial sync. As per my understanding, the idea is that, similar
to tables, the CREATE SUBSCRIPTION command (with copy_data = true)
will create the new sequence entries in pg_subscription_rel with the
state as 'i'. Then the sequence-sync worker would start a transaction
and one-by-one copy the latest sequence values for each sequence (that
has state as 'i' in pg_subscription_rel) and mark its state as ready
'r' and commit the transaction. Now if there is an error during this
operation it will restart the entire operation. The idea of creating a
new relfilenode is to handle the error so that if there is a rollback,
the sequence state will be rolled back to 'i' and the sequence value
will also be rolled back. The other option could be that we update the
sequence value without a new relfilenode and if the transaction rolled
back then only the sequence's state will be rolled back to 'i'. This
would work with a minor inconsistency that sequence values will be
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
I am not sure if that matters because anyway, they can quickly be
out-of-sync with the publisher again.

I think it would be fine in many cases if the sequence value is
up-to-date even when the sequence state is 'i' in pg_subscription_rel.
But the case we would like to avoid is where, say, the sequence-sync
worker does both synchronizing sequence values and updating the
sequence states for all sequences in one transaction, and if there is
an error we end up retrying the synchronization for all sequences.

One idea to avoid this is to update sequences in chunks (say 100 or
some threshold number of sequences per transaction). Then we would
only redo the sync for the last and still-pending set of sequences.

That could be one idea.

Now, say we don't want to maintain the state of sequences for initial
sync at all then after the error how will we detect if there are any
pending sequences to be synced? One possibility is that we maintain a
subscription level flag 'subsequencesync' in 'pg_subscription' to
indicate whether sequences need sync. This flag would indicate whether
to sync all the sequences in pg_subscription_rel. This would mean that
if there is an error while syncing the sequences we will resync all
the sequences again. This could be acceptable considering the chances
of error during sequence sync are low. The benefit is that both the
REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same
idea and sync all sequences without needing a new relfilenode. Users
can always refer to the 'subsequencesync' flag in 'pg_subscription' to see if
all the sequences are synced after executing the command.

I think that REFRESH PUBLICATION {SEQUENCES} can be executed even
while the sequence-sync worker is synchronizing sequences. In this
case, the worker might not see new sequences added by the concurrent
REFRESH PUBLICATION {SEQUENCES} command since it's already running.
The worker could end up marking the subsequencesync as completed while
not synchronizing these new sequences.

This is possible, but we could avoid this problem with REFRESH
PUBLICATION {SEQUENCES} by not allowing the subsequencestate to be
changed while the sequence-sync worker is syncing the sequences. This
could be restrictive, but there don't seem to be cases where a user
would want to immediately refresh sequences after creating the
subscription.

I'm concerned that users would not be able to add sequences during the
time the sequence-worker is syncing the sequences. For example,
suppose we have 10000 sequences and execute REFRESH PUBLICATION
{SEQUENCES} to synchronize 10000 sequences. Now if we add one sequence
to the publication and want to synchronize it to the subscriber, we
have to wait for the current REFRESH PUBLICATION {SEQUENCES} to
complete, and then execute it again, synchronizing 10001 sequences,
instead of synchronizing only the new one.

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and sequences owned by the table. That is,
after the tablesync worker caught up with the apply worker, the
tablesync worker synchronizes sequences associated with the target
table as well. One benefit would be that at the time of initial table
sync being completed, the table and its sequence data are consistent.

Correction; it's not guaranteed that the sequence data and table data
are consistent even in this case since the tablesync worker could get
on-disk sequence data that might have already been updated.

The benefit of this approach is not clear to me. Our aim is to sync
all sequences before the upgrade, so not sure if this helps because
anyway both table values and corresponding sequences can again be
out-of-sync very quickly.

Right.

Given that our aim is to sync all sequences before the upgrade, do we
need to synchronize sequences even at CREATE SUBSCRIPTION time? In
cases where there are a large number of sequences, synchronizing
sequences in addition to tables could add overhead and make less
sense, because sequences can again be out-of-sync quickly and
typically CREATE SUBSCRIPTION is not executed just before the upgrade.

I think for the upgrade one should be creating a subscription just
before the upgrade. Isn't something similar done even in the
upgrade steps you shared once [1]?

I might be missing something but in the blog post they created
subscriptions in various ways, waited for the initial table data sync
to complete, and then set the sequence values with a buffer based on
the old cluster. What I imagined with this sequence synchronization
feature is that after the initial table sync completes, we stop
executing further transactions on the publisher, synchronize sequences
using REFRESH PUBLICATION {SEQUENCES}, and then resume the application
so that it executes transactions on the subscriber. So a subscription would be
created just before the upgrade, but sequence synchronization would
not necessarily happen at the same time of the initial table data
synchronization.

Typically users should get all the
data from the publisher before the upgrade of the publisher via
creating a subscription. Also, it would be better to keep the
implementation of sequences close to tables wherever possible. Having
said that, I understand your point as well and if you strongly feel
that we don't need to sync sequences at the time of CREATE
SUBSCRIPTION and others also don't see any problem with it then we can
consider that as well.

I see your point that it's better to keep the implementation of
sequences close to the table one. So I agree that we can start with
this approach, and we will see how it works in practice and consider
other options later.

Do you mean to sync the sequences during the REFRESH PUBLICATION
SEQUENCES command itself? If so, there is an argument that we can do
the same during CREATE SUBSCRIPTION. It would be beneficial to keep
the method of syncing the sequences the same for both the CREATE and
REFRESH commands. I have speculated on one idea above and would be
happy to see your thoughts.

I meant that the REFRESH PUBLICATION SEQUENCES command updates all
sequence states in pg_subscription_rel to 'init' state, and the
sequence-sync worker can do the synchronization work. We use the same
method for both the CREATE SUBSCRIPTION and REFRESH PUBLICATION
{SEQUENCES} commands.

Marking the state as 'init' when we would have already synced the
sequences sounds a bit odd, but otherwise this could also work if we
accept that the state could remain 'init' even though the sequences
are synced (on rollbacks).

I mean that it's just for identifying sequences that need to be
synced. With the idea of using sequence states in pg_subscription_rel,
the REFRESH PUBLICATION SEQUENCES command needs to change states to
something so that the sequence-sync worker can identify which sequence
needs to be synced. If 'init' sounds odd, we can invent a new state
for sequences, say 'needs-to-be-synced'.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#51Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#50)
Re: Logical Replication of sequences

On Tue, Jun 18, 2024 at 7:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Jun 14, 2024 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jun 13, 2024 at 6:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Now, say we don't want to maintain the state of sequences for initial
sync at all then after the error how will we detect if there are any
pending sequences to be synced? One possibility is that we maintain a
subscription level flag 'subsequencesync' in 'pg_subscription' to
indicate whether sequences need sync. This flag would indicate whether
to sync all the sequences in pg_subscription_rel. This would mean that
if there is an error while syncing the sequences we will resync all
the sequences again. This could be acceptable considering the chances
of error during sequence sync are low. The benefit is that both the
REFRESH PUBLICATION SEQUENCES and CREATE SUBSCRIPTION can use the same
idea and sync all sequences without needing a new relfilenode. Users
can always refer to the 'subsequencesync' flag in 'pg_subscription' to see if
all the sequences are synced after executing the command.

I think that REFRESH PUBLICATION {SEQUENCES} can be executed even
while the sequence-sync worker is synchronizing sequences. In this
case, the worker might not see new sequences added by the concurrent
REFRESH PUBLICATION {SEQUENCES} command since it's already running.
The worker could end up marking the subsequencesync as completed while
not synchronizing these new sequences.

This is possible, but we could avoid this problem with REFRESH
PUBLICATION {SEQUENCES} by not allowing the subsequencestate to be
changed while the sequence-sync worker is syncing the sequences. This
could be restrictive, but there don't seem to be cases where a user
would want to immediately refresh sequences after creating the
subscription.

I'm concerned that users would not be able to add sequences during the
time the sequence-worker is syncing the sequences. For example,
suppose we have 10000 sequences and execute REFRESH PUBLICATION
{SEQUENCES} to synchronize 10000 sequences. Now if we add one sequence
to the publication and want to synchronize it to the subscriber, we
have to wait for the current REFRESH PUBLICATION {SEQUENCES} to
complete, and then execute it again, synchronizing 10001 sequences,
instead of synchronizing only the new one.

I see your point and it could hurt such scenarios even though they
won't be frequent. So, let's focus on our other approach of
maintaining the flag at a per-sequence level in pg_subscription_rel.

Or yet another idea I came up with is that a tablesync worker will
synchronize both the table and sequences owned by the table. That is,
after the tablesync worker caught up with the apply worker, the
tablesync worker synchronizes sequences associated with the target
table as well. One benefit would be that at the time of initial table
sync being completed, the table and its sequence data are consistent.

Correction; it's not guaranteed that the sequence data and table data
are consistent even in this case since the tablesync worker could get
on-disk sequence data that might have already been updated.

The benefit of this approach is not clear to me. Our aim is to sync
all sequences before the upgrade, so not sure if this helps because
anyway both table values and corresponding sequences can again be
out-of-sync very quickly.

Right.

Given that our aim is to sync all sequences before the upgrade, do we
need to synchronize sequences even at CREATE SUBSCRIPTION time? In
cases where there are a large number of sequences, synchronizing
sequences in addition to tables could add overhead and make less
sense, because sequences can again be out-of-sync quickly and
typically CREATE SUBSCRIPTION is not executed just before the upgrade.

I think for the upgrade one should be creating a subscription just
before the upgrade. Isn't something similar done even in the
upgrade steps you shared once [1]?

I might be missing something but in the blog post they created
subscriptions in various ways, waited for the initial table data sync
to complete, and then set the sequence values with a buffer based on
the old cluster. What I imagined with this sequence synchronization
feature is that after the initial table sync completes, we stop
executing further transactions on the publisher, synchronize sequences
using REFRESH PUBLICATION {SEQUENCES}, and then resume the application
so that it executes transactions on the subscriber. So a subscription would be
created just before the upgrade, but sequence synchronization would
not necessarily happen at the same time of the initial table data
synchronization.

It depends on the exact steps of the upgrade. For example, if one
stops the publisher before adding sequences to a subscription, either
via the create subscription or alter subscription add/set command,
then there won't be a need for a separate refresh; OTOH, if one
follows the steps you mentioned, then the refresh would be required.
As you are okay with syncing the sequences while creating a
subscription in the below part of the email, there is not much point
in arguing about this further.

Typically users should get all the
data from the publisher before the upgrade of the publisher via
creating a subscription. Also, it would be better to keep the
implementation of sequences close to tables wherever possible. Having
said that, I understand your point as well and if you strongly feel
that we don't need to sync sequences at the time of CREATE
SUBSCRIPTION and others also don't see any problem with it then we can
consider that as well.

I see your point that it's better to keep the implementation of
sequences close to the table one. So I agree that we can start with
this approach, and we will see how it works in practice and consider
other options later.

makes sense.

Marking the state as 'init' when we would have already synced the
sequences sounds a bit odd, but otherwise this could also work if we
accept that the state could remain 'init' even though the sequences
are synced (on rollbacks).

I mean that it's just for identifying sequences that need to be
synced. With the idea of using sequence states in pg_subscription_rel,
the REFRESH PUBLICATION SEQUENCES command needs to change states to
something so that the sequence-sync worker can identify which sequence
needs to be synced. If 'init' sounds odd, we can invent a new state
for sequences, say 'needs-to-be-synced'.

Agreed and I am not sure which is better because there is a value in
keeping the state name the same for both sequences and tables. We
probably need more comments in code and doc updates to make the
behavior clear. We can start with the sequence state as 'init' for
'needs-to-be-synced' and 'ready' for 'synced' and can change if others
feel so during the review.

--
With Regards,
Amit Kapila.

#52vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#51)
3 attachment(s)
Re: Logical Replication of sequences

On Tue, 18 Jun 2024 at 16:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

Agreed and I am not sure which is better because there is a value in
keeping the state name the same for both sequences and tables. We
probably need more comments in code and doc updates to make the
behavior clear. We can start with the sequence state as 'init' for
'needs-to-be-synced' and 'ready' for 'synced' and can change if others
feel so during the review.

Here is a patch which does the sequence synchronization along the
following lines, based on the above discussion:
This commit introduces sequence synchronization during 1) creation of
a subscription, for the initial sync of sequences, 2) refresh
publication, to synchronize newly added sequences, and 3) refresh
publication sequences, to synchronize all the sequences.
1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):
- The subscriber retrieves sequences associated with publications.
- Sequences are added in the 'init' state to the pg_subscription_rel table.
- A sequence synchronization worker will be started if there are any
sequences to be synchronized.
- A new sequence synchronization worker handles synchronization in
batches of 100 sequences:
a) Retrieves sequence values using pg_sequence_state from the
publisher (see the example query after this list).
b) Sets sequence values accordingly.
c) Updates sequence state to 'READY' in pg_subscription_rel
d) Commits batches of 100 synchronized sequences.
2) Refreshing sequences with ALTER SUBSCRIPTION ... REFRESH
PUBLICATION (no syntax change):
- Stale sequences are removed from pg_subscription_rel.
- Newly added sequences in the publisher are added in 'init' state
to pg_subscription_rel.
- Sequence synchronization will be done by the sequence sync worker
as described in the subscription creation process.
- Sequence synchronization occurs for newly added sequences only.
3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES for refreshing all sequences:
- Removes stale sequences and adds newly added sequences from the
publisher to pg_subscription_rel.
- Resets all sequences in pg_subscription_rel to 'init' state.
- Initiates sequence synchronization for all sequences via the
sequence sync worker as described in the subscription creation
process.
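
For reference, fetching the state from the publisher with the new
function added in 0001 would look roughly like this ('myseq' is a
placeholder; the returned columns are the sequence's last value, log
count, is_called flag, and page LSN, per the patch):

SELECT * FROM pg_sequence_state('myseq'::regclass);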

Regards,
Vignesh

Attachments:

v20240619-0001-Introduce-pg_sequence_state-and-SetSequenc.patch
From f484d6aafe69d5452dcd1e56d32774707a6dc68b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240619 1/3] Introduce pg_sequence_state and SetSequence
 functions for enhanced sequence management

This patch introduces new functionalities to PostgreSQL:
- pg_sequence_state allows retrieval of sequence values using LSN.
- SetSequence enables updating sequences with user-specified values.
---
 src/backend/commands/sequence.c | 161 ++++++++++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat |   8 ++
 src/include/commands/sequence.h |   1 +
 3 files changed, 162 insertions(+), 8 deletions(-)

diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 28f8522264..57453a7356 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,80 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ */
+void
+SetSequence(Oid seq_relid, int64 value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page' buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		/* allow filtering by origin on a sequence update */
+		XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/* Clear local cache so that we don't think we have cached numbers */
+	/* Note that we do not change the currval() state */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +552,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -551,7 +627,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -680,7 +756,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -976,7 +1052,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1180,7 +1256,8 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn)
 {
 	Page		page;
 	ItemId		lp;
@@ -1197,6 +1274,13 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/*
+	 * If the caller requested it, set the page LSN. This allows deciding
+	 * which sequence changes are before/after the returned sequence state.
+	 */
+	if (lsn)
+		*lsn = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1804,7 +1888,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1819,6 +1903,67 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+	XLogRecPtr	lsn;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4];
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	is_called = seq->is_called;
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	values[0] = LSNGetDatum(lsn);
+	values[1] = Int64GetDatum(last_value);
+	values[2] = Int64GetDatum(log_cnt);
+	values[3] = BoolGetDatum(is_called);
+
+	memset(nulls, 0, sizeof(nulls));
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6a5476d3c4..990ef2f836 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..fad731a733 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, int64 value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
-- 
2.34.1
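
As a quick, hedged illustration of the 0001 patch above (this note and the
snippet are not part of the patch): once the patch is applied, the new
SQL-callable pg_sequence_state() can be exercised as below; it returns a
single row of (page_lsn, last_value, log_cnt, is_called) for the given
sequence. SetSequence() has no SQL wrapper here; it is a C-level entry point
presumably intended for the apply-side code in the follow-up patches. The
sequence name is just an example.

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');
    SELECT nextval('demo_seq');

    -- One row: the page LSN plus the on-disk counters of the sequence.
    SELECT * FROM pg_sequence_state('demo_seq'::regclass);

Per the comment added to read_seq_tuple(), the returned page_lsn is what
allows deciding whether a given sequence change happened before or after the
captured state.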

Attachment: v20240619-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From c1c0c2cd5913b6f2b335325fc1145f517f3bd74c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240619 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications. This improvement facilitates seamless
synchronization of sequence data during operations such as
CREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore,
the psql commands \d and \dRp have been enhanced to display the
publications that a given sequence is part of and the sequences included
in a publication.
---
 doc/src/sgml/ref/create_publication.sgml  |  23 +-
 doc/src/sgml/system-views.sgml            |  67 ++++
 src/backend/catalog/pg_publication.c      |  86 ++++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    |  54 ++-
 src/backend/parser/gram.y                 |  22 +-
 src/bin/pg_dump/pg_dump.c                 |  21 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  11 +
 src/bin/psql/describe.c                   | 218 ++++++++---
 src/bin/psql/tab-complete.c               |   4 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |   6 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 449 ++++++++++++----------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  15 +
 18 files changed, 710 insertions(+), 303 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..107ce63b2b 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -119,10 +124,11 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
+    <term><literal>FOR ALL SEQUENCES</literal></term>
     <listitem>
      <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      sequences in the database, including those created in the future.
      </para>
     </listitem>
    </varlistentry>
@@ -240,10 +246,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR ALL SEQUENCES</literal>, or <literal>FOR TABLES IN SCHEMA</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,7 +265,8 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR ALL SEQUENCES</command>, and
    <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
    user to be a superuser.
   </para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 8c18bea902..7491d50dc4 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2138,6 +2143,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   their associated sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..175caf23d0 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets a list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1254,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index efb29adeb3..1057946dc0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -398,6 +398,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..475256f9e4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -727,6 +727,7 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 ObjectAddress
 CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 {
+	ListCell   *lc;
 	Relation	rel;
 	ObjectAddress myself;
 	Oid			puboid;
@@ -741,6 +742,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	bool		for_all_tables = false;
+	bool		for_all_sequences = false;
+
+	/*
+	 * Translate the list of object types (represented by strings) to bool
+	 * flags.
+	 */
+	foreach(lc, stmt->for_all_objects)
+	{
+		char	   *val = strVal(lfirst(lc));
+
+		if (strcmp(val, "tables") == 0)
+			for_all_tables = true;
+		else if (strcmp(val, "sequences") == 0)
+			for_all_sequences = true;
+	}
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
@@ -748,11 +766,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 					   get_database_name(MyDatabaseId));
 
 	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	if (for_all_tables && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 				 errmsg("must be superuser to create FOR ALL TABLES publication")));
 
+	/* FOR ALL SEQUENCES requires superuser */
+	if (for_all_sequences && !superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
 	/* Check if name is used */
@@ -782,7 +806,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+		BoolGetDatum(for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,12 +834,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (for_all_tables || for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+
+	/*
+	 * If the publication might have either tables or sequences (directly or
+	 * through a schema), process that.
+	 */
+	if (!for_all_tables || !for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1008,7 +1039,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1525,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1749,7 +1780,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, rels)
 	{
@@ -1828,7 +1859,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, schemas)
 	{
@@ -1919,6 +1950,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4d582950b7..d99285dfa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -455,7 +455,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <node>	pub_obj_type
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10557,6 +10558,8 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION FOR ALL TABLES [WITH options]
  *
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
  * pub_obj is one of:
@@ -10575,13 +10578,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					n->for_all_objects = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10696,19 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+pub_obj_type:	TABLES
+					{ $$ = (Node *) makeString("tables"); }
+				| SEQUENCES
+					{ $$ = (Node *) makeString("sequences"); }
+	;
+
+pub_obj_type_list:	pub_obj_type
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' pub_obj_type
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e324070828..953b9128ad 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4185,6 +4185,7 @@ getPublications(Archive *fout, int *numPublications)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4204,23 +4205,29 @@ getPublications(Archive *fout, int *numPublications)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4232,6 +4239,7 @@ getPublications(Archive *fout, int *numPublications)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4251,6 +4259,8 @@ getPublications(Archive *fout, int *numPublications)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4304,6 +4314,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
 
+	if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
 	{
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 865823868f..976115ab3d 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..ace1d1b661 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,17 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f67bf0b892..482c756b3a 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,52 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2110,11 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6219,7 +6281,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6233,19 +6295,37 @@ listPublications(const char *pattern)
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT pubname AS \"%s\",\n"
-					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
-					  gettext_noop("Name"),
-					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
-					  gettext_noop("Inserts"),
-					  gettext_noop("Updates"),
-					  gettext_noop("Deletes"));
+	if (pset.sversion >= 170000)
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  puballsequences AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("All sequences"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6343,6 +6423,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6359,6 +6440,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6372,6 +6454,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6423,6 +6509,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6430,6 +6518,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6440,6 +6530,10 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..f1ee348909 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,9 +3159,9 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
 		COMPLETE_WITH("TABLES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 990ef2f836..8e68adb01f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11945,6 +11945,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..ee7b875831 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..62d1cf47e2 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4168,7 +4168,8 @@ typedef struct CreatePublicationStmt
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
@@ -4191,7 +4192,8 @@ typedef struct AlterPublicationStmt
 	 * objects.
 	 */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 	AlterPublicationAction action;	/* What action to perform with the given
 									 * objects */
 } AlterPublicationStmt;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..d54008ae6f 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,53 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+         pubname         | puballtables | puballsequences 
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f            | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+                       Sequence "pub_test.testpub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_forallsequences"
+
+\dRp+ testpub_forallsequences
+                                    Publication testpub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +272,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +290,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +322,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +338,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +357,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +368,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +404,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +417,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +535,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +752,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +939,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1147,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1188,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1269,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1282,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1311,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1337,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1408,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1419,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1440,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1452,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1464,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1475,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1486,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1497,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1528,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1540,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1622,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1643,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 13178e2b3d..1c646f15a1 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1441,6 +1441,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..e9fc959c8b 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,21 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
-- 
2.34.1
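
As a quick check of the pg_publication_sequences view added above, the
sequences published by a FOR ALL SEQUENCES publication can be listed
directly (publication name taken from the regression test; columns as
defined in the rules.out change):

SELECT pubname, schemaname, sequencename
  FROM pg_publication_sequences
 WHERE pubname = 'testpub_forallsequences';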

Attachment: v20240619-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 86fcb37fa0ff508fad9516fc8372dbd47901546e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240619 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing occurs with
     ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Sequences newly added on the publisher are added in 'init'
     state to pg_subscription_rel.
   - The sequence synchronization worker is started as in the
     subscription creation process.
   - Only the newly added sequences are synchronized.

3) Introduce a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds sequences newly created on
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - The sequence synchronization worker then synchronizes all
     sequences, as in the subscription creation process.
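
For illustration, a minimal usage sketch of the commands involved, based on
the behaviour described above (the connection string and object names are
placeholders):

-- On the publisher
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

-- On the subscriber
CREATE SUBSCRIPTION seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION seq_pub;

-- Pick up sequences newly added on the publisher; only those are synchronized
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

-- Re-synchronize all sequences
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;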
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  11 +
 src/backend/catalog/pg_publication.c          |   2 +-
 src/backend/catalog/pg_subscription.c         |  64 +++
 src/backend/commands/subscriptioncmds.c       | 270 +++++++++++-
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |   9 +
 src/backend/postmaster/bgworker.c             |   3 +
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  51 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 400 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 140 +++++-
 src/backend/replication/logical/worker.c      |  12 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |   1 +
 src/include/nodes/parsenodes.h                |   1 +
 src/include/replication/logicalworker.h       |   1 +
 src/include/replication/worker_internal.h     |  16 +
 src/test/subscription/t/034_sequences.pl      | 145 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1126 insertions(+), 28 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 698169afdb..677abb57f2 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5200,8 +5200,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and the sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 746d5bd330..5d9d6f3e50 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the table synchronization workers, the
+    sequence synchronization worker, and parallel apply workers.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index b2ad9b446f..5f0170272f 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2009,8 +2009,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..fc8a33c0b5 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch information about missing sequences from the publisher and
+      re-synchronize the sequence data.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 175caf23d0..e18590466d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1001,7 +1001,7 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 /*
  * Gets list of all relation published by FOR ALL SEQUENCES publication(s).
  */
-List *
+static List *
 GetAllSequencesPublicationRelations(void)
 {
 	Relation	classRel;
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..7673f1384c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -551,3 +552,66 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid, char state)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	MemoryContext oldctx;
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	if (state != '\0')
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(state));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subseq;
+		SubscriptionRelState *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		subseq = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		seqinfo = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+		seqinfo->relid = subseq->srrelid;
+		d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
+							Anum_pg_subscription_rel_srsublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
\ No newline at end of file
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..d2d02a75d8 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -759,6 +760,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List *sequences;
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -769,6 +771,23 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
+			/* Add sequences */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach(lc, sequences)
+			{
+				RangeVar   *rv = (RangeVar *) lfirst(lc);
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * Get the table list from publisher and build local table status
 			 * info.
@@ -898,6 +917,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		/* Get the table list from publisher. */
 		pubrel_names = fetch_table_list(wrconn, sub->publications);
 
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
+
 		/* Get local table list. */
 		subrel_states = GetSubscriptionRelations(sub->oid, false);
 		subrel_count = list_length(subrel_states);
@@ -980,6 +1002,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1006,13 +1029,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				/* Stop the worker if the relation is not a sequence. */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
@@ -1047,7 +1072,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		for (off = 0; off < remove_rel_len; off++)
 		{
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+				get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1077,6 +1103,147 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequence data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	ListCell   *lc;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid, '\0');
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build a qsorted array of local sequence oids for faster lookup.
+		 * This can potentially contain all sequences in the database, so
+		 * speed of lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach(lc, subseq_states)
+		{
+			SubscriptionSeqInfo *seqinfo = (SubscriptionSeqInfo *) lfirst(lc);
+
+			subseq_local_oids[off++] = seqinfo->seqid;
+		}
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and try to match them to locally
+		 * known sequences. If a sequence is not known locally, create a new
+		 * state for it.
+		 *
+		 * Also build an array of local oids of the remote sequences for the
+		 * next step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach(lc, pubseq_names)
+		{
+			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+
+			if (!bsearch(&relid, subseq_local_oids,
+						 subrel_count, sizeof(Oid), oid_cmp))
+			{
+				AddSubscriptionRelState(sub->oid, relid,
+										SUBREL_STATE_INIT,
+										InvalidXLogRecPtr, true);
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+										 rv->schemaname, rv->relname, sub->name)));
+			}
+		}
+
+		/*
+		 * Next, remove state for sequences we no longer care about, using the
+		 * data collected above.
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			else
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
+			}
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1404,6 +1571,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
@@ -2060,11 +2241,17 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
+		char	   *schemaname;
+		char	   *tablename;
+
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			schemaname = get_namespace_name(get_rel_namespace(relid));
+			tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2234,6 +2421,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	ListCell   *lc;
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach(lc, publications)
+	{
+		char	   *pubname = strVal(lfirst(lc));
+
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index d99285dfa3..78acd3a0d2 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10823,6 +10823,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index fa52b6dfa8..4b4b840e57 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -132,6 +132,9 @@ static const struct
 	},
 	{
 		"TablesyncWorkerMain", TablesyncWorkerMain
+	},
+	{
+		"SequencesyncWorkerMain", SequencesyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..066caa9b6b 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -267,6 +267,39 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches the given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip workers that are not sequence sync workers. */
+		if (!isSequencesyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -311,6 +344,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -320,7 +354,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - parallel apply worker is the only kind of subworker
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
-	Assert(is_tablesync_worker == OidIsValid(relid));
+	Assert(is_tablesync_worker == OidIsValid(relid) || is_sequencesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
 	ereport(DEBUG1,
@@ -396,7 +430,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -491,6 +526,15 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -1351,6 +1395,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..36440cf9eb
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,400 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/indexing.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "executor/executor.h"
+#include "nodes/makefuncs.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "storage/lmgr.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = 0;
+
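+	/*
+	 * Note: last_value + log_cnt is the value the publisher has already
+	 * WAL-logged for this sequence; see the comment in copy_sequence() for
+	 * why the synchronized value is based on that rather than on last_value
+	 * alone.
+	 */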
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%u)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetLSN(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * Logical replication of sequences is based on decoding WAL records,
+	 * describing the "next" state of the sequence the current state in the
+	 * relfilenode is yet to reach. But during the initial sync we read the
+	 * current state, so we need to reconstruct the WAL record logged when we
+	 * started the current batch of sequence values.
+	 *
+	 * Otherwise we might get duplicate values (on subscriber) if we failed
+	 * over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* Set the sequence in a non-transactional way. */
+	SetSequence(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Copy subscription's sequence data from the publisher.
+ */
+void
+copy_subscription_sequences(WalReceiverConn *conn, Oid subid, List *sequences)
+{
+	WalRcvExecResult *res;
+	char		slotname[NAMEDATALEN] = {0};
+	ListCell   *lc;
+
+	/*
+	 * Start a transaction on the remote node in REPEATABLE READ mode, so
+	 * that all the sequence states are fetched within a single transaction.
+	 */
+	res = walrcv_exec(conn,
+						"BEGIN READ ONLY ISOLATION LEVEL REPEATABLE READ",
+						0, NULL);
+	if (res->status != WALRCV_OK_COMMAND)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence copy could not start transaction on publisher: %s",
+						res->err)));
+	walrcv_clear_result(res);
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	foreach(lc, sequences)
+	{
+		RangeVar   *rv = (RangeVar *) lfirst(lc);
+		Oid			relid;
+		XLogRecPtr	sequence_lsn = InvalidXLogRecPtr;
+		Relation	sequencerel;
+
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Check for supported relkind. */
+		CheckSubscriptionRelkind(get_rel_relkind(relid),
+								 rv->schemaname, rv->relname);
+
+		sequencerel = table_open(relid, RowExclusiveLock);
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or superuser,
+		 * who has it implicitly), but other roles should not be able to
+		 * circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel))));
+
+		sequence_lsn = copy_sequence(conn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		ereport(LOG,
+				errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+					   get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));
+		table_close(sequencerel, NoLock);
+	}
+
+	res = walrcv_exec(conn, "COMMIT", 0, NULL);
+	if (res->status != WALRCV_OK_COMMAND)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence copy could not finish transaction on publisher: %s",
+						res->err));
+	walrcv_clear_result(res);
+}
+
+
+/*
+ * Synchronize sequences in the sequence sync worker.
+ *
+ * Fetches all sequences of the subscription that are still in the init
+ * state and copies their state from the publisher, committing after every
+ * batch of sequences.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List *sequences;
+	char	   slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner;
+	ListCell *lc;
+	int 		currseq = 0;
+	Oid			subid = MyLogicalRepWorker->subid;
+
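+/*
+ * Maximum number of sequences that are synchronized per transaction.  The
+ * pg_subscription_rel updates made for each batch are committed before the
+ * next batch starts, so a failure partway through the sequence list does not
+ * undo the batches that were already committed.
+ */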
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that still need to be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionSequences(subid,
+										 SUBREL_STATE_INIT);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+
+	foreach(lc, sequences)
+	{
+		SubscriptionRelState *seqinfo = (SubscriptionRelState *) lfirst(lc);
+		Relation	sequencerel;
+		XLogRecPtr	sequence_lsn;
+
+		if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
+			StartTransactionCommand();
+
+		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that the sequence sync worker has the required permissions on
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+									ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						get_relkind_objtype(sequencerel->rd_rel->relkind),
+						RelationGetRelationName(sequencerel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or superuser,
+		 * who has it implicitly), but other roles should not be able to
+		 * circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel))));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+		ereport(LOG,
+				errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+					   get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));
+		table_close(sequencerel, NoLock);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		currseq++;
+
+		if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0 || currseq == list_length(sequences))
+			CommitTransactionCommand();
+	}
+}
+
+/*
+ * Execute the initial sequence sync with error handling.  Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report that the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical replication sequencesync worker entry point */
+void
+SequencesyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(false);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b00267f042..a076412609 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -139,9 +139,9 @@ static StringInfo copybuf = NULL;
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(bool istable)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,10 +157,15 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (istable)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequences synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
@@ -387,7 +392,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(true);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -463,6 +468,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -660,6 +676,105 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If there is a sequence syncronization worker running already, no need to
+ * start a sequence synchronization in this case. The existing sequence
+ * sync worker will syncronize the sequences. If there are still any sequences
+ * to be synced after the sequence sync worker exited, then we new sequence
+ * sync worker can be started in the next iteration. To prevent starting the
+ * seqeuence sync worker at a high frequency after a failure, we store its last
+ * start time. We start the sync worker for the same relation after waiting
+ * at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	ListCell   *lc;
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Start sequence sync worker if there is no sequence sync worker running.
+	 */
+	foreach(lc, table_states_not_ready)
+	{
+		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		LogicalRepWorker *syncworker;
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+															true);
+		/*
+		 * If there is a sequence sync worker, it will handle the sync of
+		 * this sequence.
+		 */
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int	nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence sync
+			 * worker to sync the sequences.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID);
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,9 +797,16 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
 			break;
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1320,7 +1442,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(true);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1716,7 +1838,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(true);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b5a80fe3e8..1211de1e27 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,12 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4631,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequences synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4646,7 +4656,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index f1ee348909..a2e8bd9d44 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..3cf7834f8d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -91,5 +91,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionSequences(Oid subid, char state);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 62d1cf47e2..76485b2a60 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4215,6 +4215,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..f380c1ba60 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -19,6 +19,7 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
 extern void TablesyncWorkerMain(Datum main_arg);
+extern void SequencesyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..130391fb27 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -32,6 +32,7 @@ typedef enum LogicalRepWorkerType
 	WORKERTYPE_TABLESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
+	WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;
 
 typedef struct LogicalRepWorker
@@ -240,6 +241,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +255,8 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(bool istable);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -325,10 +330,15 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+extern void copy_subscription_sequences(WalReceiverConn *conn, Oid subid,
+										List *sequences);
+
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
 #define isTablesyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequencesyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
@@ -336,6 +346,12 @@ am_tablesync_worker(void)
 	return isTablesyncWorker(MyLogicalRepWorker);
 }
 
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequencesyncWorker(MyLogicalRepWorker);
+}
+
 static inline bool
 am_leader_apply_worker(void)
 {
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..94bf83a14b
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,145 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+	CREATE SEQUENCE s2;
+	CREATE SEQUENCE s3;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+# Wait for initial sync to finish as well
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');
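+# Note: the expected 132 is last_value (100) plus log_cnt (32): the publisher
+# pre-logs 32 sequence values per WAL record (SEQ_LOG_VALS), and the sync
+# copies last_value + log_cnt, i.e. the value already covered by WAL.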
+
+# create a new sequence, it should be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s2;
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# changes to existing sequences should not be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+# Refresh the publication after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'existing sequence s was not re-synced by REFRESH PUBLICATION');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '132|0|t', 'newly created sequence s2 synced by REFRESH PUBLICATION');
+
+# Both new and existing sequences should be synced after REFRESH
+# PUBLICATION SEQUENCES.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s3;
+	INSERT INTO seq_test SELECT nextval('s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Refresh the publication sequences after creating a new sequence and
+# updating an existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '231|0|t', 'existing sequence s2 re-synced by REFRESH PUBLICATION SEQUENCES');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s3;
+));
+
+is($result, '132|0|t', 'newly created sequence s3 synced by REFRESH PUBLICATION SEQUENCES');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 61ad417cde..7e10347fe0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2768,6 +2768,7 @@ SubscriptingRefState
 Subscription
 SubscriptionInfo
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1

#53vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#52)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 19 Jun 2024 at 20:33, vignesh C <vignesh21@gmail.com> wrote:

On Tue, 18 Jun 2024 at 16:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

Agreed and I am not sure which is better because there is a value in
keeping the state name the same for both sequences and tables. We
probably need more comments in code and doc updates to make the
behavior clear. We can start with the sequence state as 'init' for
'needs-to-be-synced' and 'ready' for 'synced' and can change if others
feel so during the review.

Here is a patch which implements the sequence synchronization along the
following lines from the above discussion:
This commit introduces sequence synchronization during 1) subscription
creation, for the initial sync of sequences, 2) refresh publication, to
synchronize sequences newly created on the publisher, and 3) refresh
publication sequences, to synchronize all the sequences.
1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):
- The subscriber retrieves sequences associated with publications.
- Sequences are added in the 'init' state to the pg_subscription_rel table.
- Sequence synchronization worker will be started if there are any
sequences to be synchronized
- A new sequence synchronization worker handles synchronization in
batches of 100 sequences:
a) Retrieves sequence values using pg_sequence_state from the publisher.
b) Sets sequence values accordingly.
c) Updates sequence state to 'READY' in pg_subscription_rel
d) Commits batches of 100 synchronized sequences.
2) Refreshing sequences with ALTER SUBSCRIPTION ... REFRESH
PUBLICATION (no syntax change):
- Stale sequences are removed from pg_subscription_rel.
- Newly added sequences in the publisher are added in 'init' state
to pg_subscription_rel.
- Sequence synchronization will be done by sequence sync worker as
listed in subscription creation process.
- Sequence synchronization occurs for newly added sequences only.
3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES for refreshing all sequences:
- Removes stale sequences and adds newly added sequences from the
publisher to pg_subscription_rel.
- Resets all sequences in pg_subscription_rel to 'init' state.
- Initiates sequence synchronization for all sequences by sequence
sync worker as listed in subscription creation process.

Here is an updated patch with a few fixes: it removes an unused
function, changes a few references from table to sequence, and adds a
CHECK_FOR_INTERRUPTS call in the sequence sync worker loop.
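
To make the intended usage concrete, here is a rough sketch of how the
feature is exercised end to end (the connection string and object names
are only examples):

    -- publisher
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber: published sequences are added in 'init' state and synced
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- pick up sequences created on the publisher after the subscription
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

    -- re-synchronize all published sequences (existing ones reset to 'init')
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;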

Regards,
Vignesh

Attachments:

v20240620-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch
From c46c4dd2e7f65e47209f8dbef594498b0e895e89 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240620 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications. This improvement facilitates seamless
synchronization of sequence data during operations such as
CREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore,
enhancements to the psql commands (\d and \dRp) improve the display of the
publications that include a given sequence and of the sequences included in
a publication.
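
For example (object names are illustrative):

    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'pub_all_seq';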
---
 doc/src/sgml/ref/create_publication.sgml  |  23 +-
 doc/src/sgml/system-views.sgml            |  67 ++++
 src/backend/catalog/pg_publication.c      |  86 ++++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    |  54 ++-
 src/backend/parser/gram.y                 |  22 +-
 src/bin/pg_dump/pg_dump.c                 |  21 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  11 +
 src/bin/psql/describe.c                   | 218 ++++++++---
 src/bin/psql/tab-complete.c               |   4 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   9 +
 src/include/nodes/parsenodes.h            |   6 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 449 ++++++++++++----------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  15 +
 18 files changed, 712 insertions(+), 303 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..107ce63b2b 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -119,10 +124,11 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
+    <term><literal>FOR ALL SEQUENCES</literal></term>
     <listitem>
      <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      sequences in the database, including those created in the future.
      </para>
     </listitem>
    </varlistentry>
@@ -240,10 +246,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR ALL SEQUENCES</literal>, or <literal>FOR TABLES IN SCHEMA</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,7 +265,8 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR ALL SEQUENCES</command>, and
    <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
    user to be a superuser.
   </para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 8c18bea902..7491d50dc4 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2138,6 +2143,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..175caf23d0 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1254,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index efb29adeb3..1057946dc0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -398,6 +398,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
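
[Illustrative use of the pg_publication_sequences view defined above; the
sequence and publication names here are hypothetical, not part of the patch:]

    CREATE SEQUENCE s1;
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'pub_seq';
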
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..475256f9e4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -727,6 +727,7 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 ObjectAddress
 CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 {
+	ListCell   *lc;
 	Relation	rel;
 	ObjectAddress myself;
 	Oid			puboid;
@@ -741,6 +742,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	bool		for_all_tables = false;
+	bool		for_all_sequences = false;
+
+	/*
+	 * Translate the list of object types (represented by strings) to bool
+	 * flags.
+	 */
+	foreach(lc, stmt->for_all_objects)
+	{
+		char	   *val = strVal(lfirst(lc));
+
+		if (strcmp(val, "tables") == 0)
+			for_all_tables = true;
+		else if (strcmp(val, "sequences") == 0)
+			for_all_sequences = true;
+	}
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
@@ -748,11 +766,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 					   get_database_name(MyDatabaseId));
 
 	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	if (for_all_tables && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 				 errmsg("must be superuser to create FOR ALL TABLES publication")));
 
+	/* FOR ALL SEQUENCES requires superuser */
+	if (for_all_sequences && !superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
 	/* Check if name is used */
@@ -782,7 +806,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+		BoolGetDatum(for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,12 +834,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (for_all_tables || for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+
+	/*
+	 * Unless the publication is FOR ALL TABLES and FOR ALL SEQUENCES, it may
+	 * specify individual tables or schemas, so process those objects.
+	 */
+	if (!for_all_tables || !for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1008,7 +1039,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1525,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1749,7 +1780,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, rels)
 	{
@@ -1828,7 +1859,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, schemas)
 	{
@@ -1919,6 +1950,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4d582950b7..d99285dfa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -455,7 +455,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <node>	pub_obj_type
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10557,6 +10558,8 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION FOR ALL TABLES [WITH options]
  *
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
  * pub_obj is one of:
@@ -10575,13 +10578,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					n->for_all_objects = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10696,19 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+pub_obj_type:	TABLES
+					{ $$ = (Node *) makeString("tables"); }
+				| SEQUENCES
+					{ $$ = (Node *) makeString("sequences"); }
+	;
+
+pub_obj_type_list:	pub_obj_type
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' pub_obj_type
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
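
[With the pub_obj_type_list rule added above, the grammar should accept
statements like the following (publication names are hypothetical):]

    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_all_objs FOR ALL TABLES, SEQUENCES;
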
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e324070828..953b9128ad 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4185,6 +4185,7 @@ getPublications(Archive *fout, int *numPublications)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4204,23 +4205,29 @@ getPublications(Archive *fout, int *numPublications)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4232,6 +4239,7 @@ getPublications(Archive *fout, int *numPublications)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4251,6 +4259,8 @@ getPublications(Archive *fout, int *numPublications)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4304,6 +4314,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
 
+	if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, pubinfo->puballtables ?
+							 ", SEQUENCES" : " FOR ALL SEQUENCES");
+
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
 	{
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 865823868f..976115ab3d 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..ace1d1b661 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,17 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f67bf0b892..482c756b3a 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,52 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2110,11 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6219,7 +6281,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6233,19 +6295,37 @@ listPublications(const char *pattern)
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT pubname AS \"%s\",\n"
-					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
-					  gettext_noop("Name"),
-					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
-					  gettext_noop("Inserts"),
-					  gettext_noop("Updates"),
-					  gettext_noop("Deletes"));
+	if (pset.sversion >= 170000)
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  puballsequences AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("All sequences"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6343,6 +6423,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6359,6 +6440,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6372,6 +6454,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6423,6 +6509,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6430,6 +6518,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6440,6 +6530,10 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..f1ee348909 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,9 +3159,9 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
 		COMPLETE_WITH("TABLES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 990ef2f836..8e68adb01f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11945,6 +11945,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
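
[The new set-returning function can also be called directly; a quick sanity
check, assuming the hypothetical pub_seq publication from the note above:]

    SELECT gps.relid::regclass AS seq
      FROM pg_get_publication_sequences('pub_seq') gps;
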
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..8cc0fe9d0d 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -148,6 +155,8 @@ extern List *GetPubPartitionOptionRelations(List *result,
 extern Oid	GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
 											int *ancestor_level);
 
+extern List *GetAllSequencesPublicationRelations(void);
+
 extern bool is_publishable_relation(Relation rel);
 extern bool is_schema_publication(Oid pubid);
 extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..62d1cf47e2 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4168,7 +4168,8 @@ typedef struct CreatePublicationStmt
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Object types ("tables", "sequences")
+									 * covered by a FOR ALL publication */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
@@ -4191,7 +4192,8 @@ typedef struct AlterPublicationStmt
 	 * objects.
 	 */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Object types ("tables", "sequences")
+									 * covered by a FOR ALL publication */
 	AlterPublicationAction action;	/* What action to perform with the given
 									 * objects */
 } AlterPublicationStmt;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..d54008ae6f 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,53 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+         pubname         | puballtables | puballsequences 
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f            | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+                       Sequence "pub_test.testpub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_forallsequences"
+
+\dRp+ testpub_forallsequences
+                                    Publication testpub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +272,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +290,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +322,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +338,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +357,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +368,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +404,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +417,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +535,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +752,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +939,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1147,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1188,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1269,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1282,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1311,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1337,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1408,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1419,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1440,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1452,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1464,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1475,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1486,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1497,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1528,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1540,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1622,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1643,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 13178e2b3d..1c646f15a1 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1441,6 +1441,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
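The view added above mirrors the existing pg_publication_tables view; a minimal query sketch against it (reusing the publication name from the regression test below) would be:

    SELECT schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'testpub_forallsequences';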
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..e9fc959c8b 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,21 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
-- 
2.34.1

Attachment: v20240620-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch)
From c61e0f981487b06dc369c7ce5b285836e6d5f135 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240620 1/3] Introduce pg_sequence_state and SetSequence
 functions for enhanced sequence management

This patch introduces two new pieces of functionality:
- pg_sequence_state() returns a sequence's current state (last_value,
  log_cnt, is_called) together with the page LSN of the sequence.
- SetSequence() sets a sequence to a caller-specified value.
---
 src/backend/commands/sequence.c | 161 ++++++++++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat |   8 ++
 src/include/commands/sequence.h |   1 +
 3 files changed, 162 insertions(+), 8 deletions(-)
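As a quick illustration, the new function can be called from SQL like any other record-returning function; the sequence name below is only a placeholder, and the column names come from the pg_proc.dat entry further down:

    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('myseq'::regclass);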

diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 28f8522264..57453a7356 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,80 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ */
+void
+SetSequence(Oid seq_relid, int64 value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		/* allow filtering by origin on a sequence update */
+		XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/* Clear local cache so that we don't think we have cached numbers */
+	/* Note that we do not change the currval() state */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +552,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -551,7 +627,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -680,7 +756,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -976,7 +1052,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1180,7 +1256,8 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn)
 {
 	Page		page;
 	ItemId		lp;
@@ -1197,6 +1274,13 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/*
+	 * If the caller requested it, set the page LSN. This allows deciding
+	 * which sequence changes are before/after the returned sequence state.
+	 */
+	if (lsn)
+		*lsn = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1804,7 +1888,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1819,6 +1903,67 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+	XLogRecPtr	lsn;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4];
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	is_called = seq->is_called;
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	values[0] = LSNGetDatum(lsn);
+	values[1] = Int64GetDatum(last_value);
+	values[2] = Int64GetDatum(log_cnt);
+	values[3] = BoolGetDatum(is_called);
+
+	memset(nulls, 0, sizeof(nulls));
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6a5476d3c4..990ef2f836 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..fad731a733 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, int64 value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
-- 
2.34.1

Attachment: v20240620-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 62bdb4f5784d50eb28d5484783f5f2728a923300 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240620 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
     ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - The sequence synchronization worker is launched as in subscription
     creation, but synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - The sequence synchronization worker then synchronizes all sequences,
     following the same steps as during subscription creation.
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  11 +
 src/backend/catalog/pg_subscription.c         |  64 ++++
 src/backend/commands/subscriptioncmds.c       | 271 ++++++++++++++-
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |   9 +
 src/backend/postmaster/bgworker.c             |   3 +
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  51 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 312 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 145 +++++++-
 src/backend/replication/logical/worker.c      |  12 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |   1 +
 src/include/nodes/parsenodes.h                |   1 +
 src/include/replication/logicalworker.h       |   1 +
 src/include/replication/worker_internal.h     |  13 +
 src/test/subscription/t/034_sequences.pl      | 145 ++++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1035 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl
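A rough end-to-end sketch of the workflow described in the commit message above; the publication name, subscription name, and connection string are placeholders, not part of the patch:

    -- publisher
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- subscriber: sequences are added in 'init' state and synchronized
    -- by the sequence synchronization worker
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub_seq;

    -- later: drop stale entries, add newly published sequences, and
    -- re-synchronize all sequences
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;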

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 698169afdb..677abb57f2 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5200,8 +5200,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and the sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 746d5bd330..5d9d6f3e50 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the table synchronization workers, the
+    sequence synchronization worker, and parallel apply workers.
    </para>
 
    <para>
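As a worked example of the sizing guidance above (numbers are purely illustrative): two subscriptions, room for two table synchronization workers, the sequence synchronization worker, and one parallel apply worker add up to six workers:

    -- 2 apply + 2 tablesync + 1 sequencesync + 1 parallel apply
    ALTER SYSTEM SET max_logical_replication_workers = 6;
    -- takes effect after a server restart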
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index b2ad9b446f..5f0170272f 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2009,8 +2009,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
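For example, the type of each running subscription worker can be inspected via pg_stat_subscription (assuming its existing worker_type column, as documented above):

    SELECT subname, pid, worker_type
      FROM pg_stat_subscription;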
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..fc8a33c0b5 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch information from the publisher about sequences that are missing
+      locally, and re-synchronize the data of all subscribed sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..7673f1384c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -551,3 +552,66 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid, char state)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	MemoryContext oldctx;
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	if (state != '\0')
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(state));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subseq;
+		SubscriptionRelState *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		subseq = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		seqinfo = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+		seqinfo->relid = subseq->srrelid;
+		d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
+							Anum_pg_subscription_rel_srsublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
\ No newline at end of file
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..9821b48ce6 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -759,6 +760,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List *sequences;
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -769,6 +771,23 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach(lc, sequences)
+			{
+				RangeVar   *rv = (RangeVar *) lfirst(lc);
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * Get the table list from publisher and build local table status
 			 * info.
@@ -898,6 +917,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		/* Get the table list from publisher. */
 		pubrel_names = fetch_table_list(wrconn, sub->publications);
 
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
+
 		/* Get local table list. */
 		subrel_states = GetSubscriptionRelations(sub->oid, false);
 		subrel_count = list_length(subrel_states);
@@ -980,6 +1002,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1006,13 +1029,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				/* Stop the worker if the relation kind is not a sequence */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
@@ -1047,7 +1072,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		for (off = 0; off < remove_rel_len; off++)
 		{
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+				get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1077,6 +1103,148 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequences data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	ListCell   *lc;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid, '\0');
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build qsorted array of local sequence oids for faster lookup. This
+		 * can potentially contain all sequences in the database so speed of
+		 * lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach(lc, subseq_states)
+		{
+			SubscriptionSeqInfo *seqinfo = (SubscriptionSeqInfo *) lfirst(lc);
+
+			subseq_local_oids[off++] = seqinfo->seqid;
+		}
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and try to match them to locally
+		 * known sequences. If the sequence is not known locally create a new
+		 * state for it.
+		 *
+		 * Also builds array of local oids of remote sequences for the next
+		 * step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach(lc, pubseq_names)
+		{
+			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+
+			if (!bsearch(&relid, subseq_local_oids,
+						 subrel_count, sizeof(Oid), oid_cmp))
+			{
+				AddSubscriptionRelState(sub->oid, relid,
+										SUBREL_STATE_INIT,
+										InvalidXLogRecPtr, true);
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+										 rv->schemaname, rv->relname, sub->name)));
+			}
+		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			else
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
+			}
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1404,6 +1572,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
@@ -2060,11 +2242,17 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
+		char	   *schemaname;
+		char	   *tablename;
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			schemaname = get_namespace_name(get_rel_namespace(relid));
+			tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2234,6 +2422,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	ListCell   *lc;
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach(lc, publications)
+	{
+		char	   *pubname = strVal(lfirst(lc));
+
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index d99285dfa3..78acd3a0d2 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10823,6 +10823,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b37ccb85ad..ac68e5a609 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -132,6 +132,9 @@ static const struct
 	},
 	{
 		"TablesyncWorkerMain", TablesyncWorkerMain
+	},
+	{
+		"SequencesyncWorkerMain", SequencesyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..a2c1039a9f 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -267,6 +267,39 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches the given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip workers other than sequencesync workers. */
+		if (!isSequencesyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -311,6 +344,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -320,7 +354,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - parallel apply worker is the only kind of subworker
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
-	Assert(is_tablesync_worker == OidIsValid(relid));
+	Assert(is_tablesync_worker == OidIsValid(relid) || is_sequencesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
 	ereport(DEBUG1,
@@ -396,7 +430,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -491,6 +526,15 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u sync %u",
+					 subid,
+					 relid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -1351,6 +1395,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..8d13ef24d8
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,312 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * Logical replication of sequences is based on decoding WAL records that
+	 * describe the "next" state of the sequence, which the current state in
+	 * the relfilenode is yet to reach. But during the initial sync we read the
+	 * current state, so we need to reconstruct the WAL record logged when we
+	 * started the current batch of sequence values.
+	 *
+	 * Otherwise we might get duplicate values (on subscriber) if we failed
+	 * over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* Set the sequence in a non-transactional way. */
+	SetSequence(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSeqeunces()
+{
+	char	   *err;
+	bool		must_use_password;
+	List *sequences;
+	char	   slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner;
+	ListCell *lc;
+	int 		currseq = 0;
+	Oid			subid = MyLogicalRepWorker->subid;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionSequences(subid,
+										 SUBREL_STATE_INIT);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+
+	foreach(lc, sequences)
+	{
+		SubscriptionRelState *seqinfo = (SubscriptionRelState *) lfirst(lc);
+		Relation	sequencerel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
+			StartTransactionCommand();
+
+		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);\
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless the
+		 * user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+									ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						get_relkind_objtype(sequencerel->rd_rel->relkind),
+						RelationGetRelationName(sequencerel));
+
+		/*
+		 * Sequence synchronization does not honor RLS policies.  That is not a
+		 * problem for subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be able
+		 * to circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel))));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+		ereport(LOG,
+				errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+					   get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));
+		table_close(sequencerel, NoLock);
+
+		currseq++;
+
+		if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0 || currseq == list_length(sequences))
+			CommitTransactionCommand();
+	}
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if it's required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSeqeunces();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication Sequencesync worker entry point */
+void
+SequencesyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(false);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b00267f042..de070bd120 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -139,9 +139,9 @@ static StringInfo copybuf = NULL;
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(bool istable)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,10 +157,15 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (istable)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequences synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
@@ -387,7 +392,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(true);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -463,6 +468,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +493,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +671,105 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequence synchronization worker is already running, there is no need
+ * to start another one; the existing sequence sync worker will synchronize
+ * the sequences. If any sequences still need to be synced after that worker
+ * exits, a new sequence sync worker can be started in the next iteration. To
+ * prevent starting the sequence sync worker at a high frequency after a
+ * failure, we store its last start time and start the sync worker again only
+ * after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply()
+{
+	ListCell   *lc;
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Start sequence sync worker if there is no sequence sync worker running.
+	 */
+	foreach(lc, table_states_not_ready)
+	{
+		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		LogicalRepWorker *syncworker;
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+															true);
+		/*
+		 * If there is a sequence sync worker, the sequence sync worker
+		 * will handle sync of this sequence.
+		 */
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int	nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence sync
+			 * worker to sync the sequences.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID);
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,9 +792,16 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
 			break;
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1320,7 +1437,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(true);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1716,7 +1833,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(true);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b5a80fe3e8..1211de1e27 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,12 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4631,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequences synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4646,7 +4656,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index f1ee348909..a2e8bd9d44 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..3cf7834f8d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -91,5 +91,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionSequences(Oid subid, char state);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 62d1cf47e2..76485b2a60 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4215,6 +4215,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..f380c1ba60 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -19,6 +19,7 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
 extern void TablesyncWorkerMain(Datum main_arg);
+extern void SequencesyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..23b4267598 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -32,6 +32,7 @@ typedef enum LogicalRepWorkerType
 	WORKERTYPE_TABLESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
+	WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;
 
 typedef struct LogicalRepWorker
@@ -240,6 +241,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +255,8 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(bool istable);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -329,6 +334,8 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
 #define isTablesyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequencesyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
@@ -336,6 +343,12 @@ am_tablesync_worker(void)
 	return isTablesyncWorker(MyLogicalRepWorker);
 }
 
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequencesyncWorker(MyLogicalRepWorker);
+}
+
 static inline bool
 am_leader_apply_worker(void)
 {
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..94bf83a14b
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,145 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus the extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+	CREATE SEQUENCE s2;
+	CREATE SEQUENCE s3;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+# Wait for the initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');
+
+# create a new sequence, it should be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s2;
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# changes to existing sequences should not be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+# Refresh the publication after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'changes to existing sequence not synced by REFRESH PUBLICATION');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '132|0|t', 'newly created sequence s2 synced after REFRESH PUBLICATION');
+
+# Changes to both new and existing sequences should be synced after REFRESH
+# PUBLICATION SEQUENCES.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s3;
+	INSERT INTO seq_test SELECT nextval('s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Refresh the publication sequences after creating a new sequence and updating
+# an existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '231|0|t', 'existing sequence s2 re-synced after REFRESH PUBLICATION SEQUENCES');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s3;
+));
+
+is($result, '132|0|t', 'newly created sequence s3 synced after REFRESH PUBLICATION SEQUENCES');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 61ad417cde..7e10347fe0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2768,6 +2768,7 @@ SubscriptingRefState
 Subscription
 SubscriptionInfo
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1

#54Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#52)
Re: Logical Replication of sequences

On Wed, Jun 19, 2024 at 8:33 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 18 Jun 2024 at 16:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

Agreed and I am not sure which is better because there is a value in
keeping the state name the same for both sequences and tables. We
probably need more comments in code and doc updates to make the
behavior clear. We can start with the sequence state as 'init' for
'needs-to-be-synced' and 'ready' for 'synced' and can change if others
feel so during the review.

Here is a patch which does the sequence synchronization in the
following lines from the above discussion:

Thanks for summarizing the points discussed. I would like to confirm
whether the patch replicates new sequences that are created
implicitly/explicitly for a publication defined as ALL SEQUENCES.

--
With Regards,
Amit Kapila.

#55vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#54)
Re: Logical Replication of sequences

On Thu, 20 Jun 2024 at 18:45, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 19, 2024 at 8:33 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 18 Jun 2024 at 16:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

Agreed and I am not sure which is better because there is a value in
keeping the state name the same for both sequences and tables. We
probably need more comments in code and doc updates to make the
behavior clear. We can start with the sequence state as 'init' for
'needs-to-be-synced' and 'ready' for 'synced' and can change if others
feel so during the review.

Here is a patch which does the sequence synchronization in the
following lines from the above discussion:

Thanks for summarizing the points discussed. I would like to confirm
whether the patch replicates new sequences that are created
implicitly/explicitly for a publication defined as ALL SEQUENCES.

Currently, for a FOR ALL SEQUENCES publication, both explicitly created
sequences and implicitly created sequences will be synchronized during
the creation of the subscription (using CREATE SUBSCRIPTION) and when
refreshing the publication sequences (using ALTER SUBSCRIPTION ... REFRESH
PUBLICATION SEQUENCES).
Therefore, the explicitly created sequence seq1:
CREATE SEQUENCE seq1;
and the implicitly created sequence seq_test2_c2_seq for the seq_test2 table:
CREATE TABLE seq_test2 (c1 int, c2 SERIAL);
will both be synchronized.

Regards,
Vignesh

#56Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#53)
Re: Logical Replication of sequences

On Thu, 20 Jun 2024 at 18:24, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 19 Jun 2024 at 20:33, vignesh C <vignesh21@gmail.com> wrote:

On Tue, 18 Jun 2024 at 16:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

Agreed and I am not sure which is better because there is a value in
keeping the state name the same for both sequences and tables. We
probably need more comments in code and doc updates to make the
behavior clear. We can start with the sequence state as 'init' for
'needs-to-be-synced' and 'ready' for 'synced' and can change if others
feel so during the review.

Here is a patch which does the sequence synchronization in the
following lines from the above discussion:
This commit introduces sequence synchronization during 1) creation of
subscription for initial sync of sequences 2) refresh publication to
synchronize the sequences for the newly created sequences 3) refresh
publication sequences for synchronizing all the sequences.
1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):
- The subscriber retrieves sequences associated with publications.
- Sequences are added in the 'init' state to the pg_subscription_rel table.
- Sequence synchronization worker will be started if there are any
sequences to be synchronized
- A new sequence synchronization worker handles synchronization in
batches of 100 sequences:
a) Retrieves sequence values using pg_sequence_state from the publisher.
b) Sets sequence values accordingly.
c) Updates sequence state to 'READY' in pg_subscription_rel
d) Commits batches of 100 synchronized sequences.
2) Refreshing sequences with ALTER SUBSCRIPTION ... REFRESH
PUBLICATION (no syntax change):
- Stale sequences are removed from pg_subscription_rel.
- Newly added sequences in the publisher are added in 'init' state
to pg_subscription_rel.
- Sequence synchronization will be done by sequence sync worker as
listed in subscription creation process.
- Sequence synchronization occurs for newly added sequences only.
3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES for refreshing all sequences:
- Removes stale sequences and adds newly added sequences from the
publisher to pg_subscription_rel.
- Resets all sequences in pg_subscription_rel to 'init' state.
- Initiates sequence synchronization for all sequences by sequence
sync worker as listed in subscription creation process.

Here is an updated patch with a few fixes to remove an unused
function, changed a few references of table to sequence and added one
CHECK_FOR_INTERRUPTS in the sequence sync worker loop.

Hi Vignesh,

I have reviewed the patches and have the following comments:

===== tablesync.c ======
1. process_syncing_sequences_for_apply can crash with:
2024-06-21 15:25:17.208 IST [3681269] LOG: logical replication apply
worker for subscription "test1" has started
2024-06-21 15:28:10.127 IST [3682329] LOG: logical replication
sequences synchronization worker for subscription "test1" has started
2024-06-21 15:28:10.146 IST [3682329] LOG: logical replication
synchronization for subscription "test1", sequence "s1" has finished
2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication
synchronization for subscription "test1", sequence "s2" has finished
2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication
sequences synchronization worker for subscription "test1" has finished
2024-06-21 15:29:53.535 IST [3682767] LOG: logical replication
sequences synchronization worker for subscription "test1" has started
TRAP: failed Assert("nestLevel > 0 && (nestLevel <= GUCNestLevel ||
(nestLevel == GUCNestLevel + 1 && !isCommit))"), File: "guc.c", Line:
2273, PID: 3682767
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (ExceptionalCondition+0xbb)[0x5b2a61861c99]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (AtEOXact_GUC+0x7b)[0x5b2a618bddfa]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (RestoreUserContext+0xc7)[0x5b2a618a6937]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1ff7dfa)[0x5b2a61115dfa]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1ff7eb4)[0x5b2a61115eb4]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (SequencesyncWorkerMain+0x33)[0x5b2a61115fe7]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (BackgroundWorkerMain+0x4ad)[0x5b2a61029cae]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (postmaster_child_launch+0x236)[0x5b2a6102fb36]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f1d12a)[0x5b2a6103b12a]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f1df0f)[0x5b2a6103bf0f]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f1bf71)[0x5b2a61039f71]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f16f73)[0x5b2a61034f73]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (PostmasterMain+0x18fb)[0x5b2a61034445]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1ab1ab8)[0x5b2a60bcfab8]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7b76bc629d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7b76bc629e40]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (_start+0x25)[0x5b2a601491a5]

Analysis:
Suppose there are two sequences (s1, s2) on the publisher. During the
initial sync, the loop
+ foreach(lc, table_states_not_ready)
iterates over table_states_not_ready, which contains both s1 and s2.
So, for s1 a sequence sync worker is started; it syncs all sequences
and then exits. Now, for s2 another sequence sync worker is started,
and it hits the above error.

Is this loop required? Instead we could just use a bool like
'is_any_sequence_not_ready'. Thoughts?
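
Something along these lines is what I had in mind (just an untested
sketch on top of the current patch, reusing the helpers it already adds;
the 'is_any_sequence_not_ready' name and the exact structure are only
illustrative):

static void
process_syncing_sequences_for_apply(void)
{
	bool		is_any_sequence_not_ready = false;
	bool		started_tx = false;

	Assert(!IsTransactionState());

	/* We need up-to-date sync state info for subscription sequences here. */
	FetchTableStates(&started_tx);

	/* Only detect whether any sequence still needs to be synced. */
	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
	{
		if (!started_tx)
		{
			StartTransactionCommand();
			started_tx = true;
		}

		if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE &&
			rstate->state == SUBREL_STATE_INIT)
		{
			is_any_sequence_not_ready = true;
			break;
		}
	}

	if (is_any_sequence_not_ready)
	{
		bool		launch;

		/*
		 * Launch at most one sequence sync worker per subscription; that
		 * worker picks up all the pending sequences itself.
		 */
		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
		launch =
			logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
												 true) == NULL &&
			logicalrep_sync_worker_count(MyLogicalRepWorker->subid) <
			max_sync_workers_per_subscription;
		LWLockRelease(LogicalRepWorkerLock);

		if (launch)
			logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
									 MyLogicalRepWorker->dbid,
									 MySubscription->oid,
									 MySubscription->name,
									 MyLogicalRepWorker->userid,
									 InvalidOid,
									 DSM_HANDLE_INVALID);
	}

	if (started_tx)
	{
		CommitTransactionCommand();
		pgstat_report_stat(true);
	}
}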

===== sequencesync.c =====
2. The function name should be 'LogicalRepSyncSequences' instead of
'LogicalRepSyncSeqeunces'

3. In function 'LogicalRepSyncSeqeunces'
sequencerel = table_open(seqinfo->relid, RowExclusiveLock);\
There is an extra '\' symbol at the end of the line.

4. In function LogicalRepSyncSeqeunces:
+ ereport(LOG,
+ errmsg("logical replication synchronization for subscription \"%s\",
sequence \"%s\" has finished",
+   get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));
+ table_close(sequencerel, NoLock);
+
+ currseq++;
+
+ if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0 || currseq ==
list_length(sequences))
+ CommitTransactionCommand();

The above message gets logged even if the changes are not committed.
Suppose the sequence sync worker exits before the commit for some reason.
Though the log will show that the sequence is synced, the sequence will
still be in the 'init' state. I think this is not desirable.
Maybe we should log the synced sequences at commit time? Thoughts?
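
For example (only a rough, untested sketch of the end of the loop in the
current patch; the message wording is just a placeholder), the report
could be emitted per batch, after the batch has been committed:

		currseq++;

		if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0 ||
			currseq == list_length(sequences))
		{
			int			total = list_length(sequences);

			CommitTransactionCommand();

			/* Only report the batch once its state changes are durable. */
			ereport(LOG,
					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
						   currseq, total, MySubscription->name));
		}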

===== General ====
5. We can use other macros like 'foreach_ptr' instead of 'foreach'

Thanks and Regards,
Shlok Kyal

#57vignesh C
vignesh21@gmail.com
In reply to: Shlok Kyal (#56)
3 attachment(s)
Re: Logical Replication of sequences

On Tue, 25 Jun 2024 at 17:53, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

On Thu, 20 Jun 2024 at 18:24, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 19 Jun 2024 at 20:33, vignesh C <vignesh21@gmail.com> wrote:

On Tue, 18 Jun 2024 at 16:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

Agreed and I am not sure which is better because there is a value in
keeping the state name the same for both sequences and tables. We
probably need more comments in code and doc updates to make the
behavior clear. We can start with the sequence state as 'init' for
'needs-to-be-synced' and 'ready' for 'synced' and can change if others
feel so during the review.

Here is a patch which does the sequence synchronization in the
following lines from the above discussion:
This commit introduces sequence synchronization during 1) creation of
subscription for initial sync of sequences 2) refresh publication to
synchronize the sequences for the newly created sequences 3) refresh
publication sequences for synchronizing all the sequences.
1) During subscription creation with CREATE SUBSCRIPTION (no syntax change):
- The subscriber retrieves sequences associated with publications.
- Sequences are added in the 'init' state to the pg_subscription_rel table.
- Sequence synchronization worker will be started if there are any
sequences to be synchronized
- A new sequence synchronization worker handles synchronization in
batches of 100 sequences:
a) Retrieves sequence values using pg_sequence_state from the publisher.
b) Sets sequence values accordingly.
c) Updates sequence state to 'READY' in pg_subscription_rel
d) Commits batches of 100 synchronized sequences.
2) Refreshing sequences with ALTER SUBSCRIPTION ... REFRESH
PUBLICATION (no syntax change):
- Stale sequences are removed from pg_subscription_rel.
- Newly added sequences in the publisher are added in 'init' state
to pg_subscription_rel.
- Sequence synchronization will be done by sequence sync worker as
listed in subscription creation process.
- Sequence synchronization occurs for newly added sequences only.
3) Introduce new command ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES for refreshing all sequences:
- Removes stale sequences and adds newly added sequences from the
publisher to pg_subscription_rel.
- Resets all sequences in pg_subscription_rel to 'init' state.
- Initiates sequence synchronization for all sequences by sequence
sync worker as listed in subscription creation process.

Here is an updated patch with a few fixes to remove an unused
function, changed a few references of table to sequence and added one
CHECK_FOR_INTERRUPTS in the sequence sync worker loop.

Hi Vignesh,

I have reviewed the patches and have the following comments:

===== tablesync.c ======
1. process_syncing_sequences_for_apply can crash with:
2024-06-21 15:25:17.208 IST [3681269] LOG: logical replication apply
worker for subscription "test1" has started
2024-06-21 15:28:10.127 IST [3682329] LOG: logical replication
sequences synchronization worker for subscription "test1" has started
2024-06-21 15:28:10.146 IST [3682329] LOG: logical replication
synchronization for subscription "test1", sequence "s1" has finished
2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication
synchronization for subscription "test1", sequence "s2" has finished
2024-06-21 15:28:10.149 IST [3682329] LOG: logical replication
sequences synchronization worker for subscription "test1" has finished
2024-06-21 15:29:53.535 IST [3682767] LOG: logical replication
sequences synchronization worker for subscription "test1" has started
TRAP: failed Assert("nestLevel > 0 && (nestLevel <= GUCNestLevel ||
(nestLevel == GUCNestLevel + 1 && !isCommit))"), File: "guc.c", Line:
2273, PID: 3682767
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (ExceptionalCondition+0xbb)[0x5b2a61861c99]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (AtEOXact_GUC+0x7b)[0x5b2a618bddfa]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (RestoreUserContext+0xc7)[0x5b2a618a6937]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1ff7dfa)[0x5b2a61115dfa]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1ff7eb4)[0x5b2a61115eb4]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (SequencesyncWorkerMain+0x33)[0x5b2a61115fe7]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (BackgroundWorkerMain+0x4ad)[0x5b2a61029cae]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (postmaster_child_launch+0x236)[0x5b2a6102fb36]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f1d12a)[0x5b2a6103b12a]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f1df0f)[0x5b2a6103bf0f]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f1bf71)[0x5b2a61039f71]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1f16f73)[0x5b2a61034f73]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (PostmasterMain+0x18fb)[0x5b2a61034445]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (+0x1ab1ab8)[0x5b2a60bcfab8]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7b76bc629d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7b76bc629e40]
postgres: logical replication sequencesync worker for subscription
16389 sync 0 (_start+0x25)[0x5b2a601491a5]

Analysis:
Suppose there are two sequences (s1, s2) on the publisher. During the
initial sync, the loop
+ foreach(lc, table_states_not_ready)
iterates over table_states_not_ready, which contains both s1 and s2.
So, for s1 a sequence sync worker is started; it syncs all sequences
and then exits. Now, for s2 another sequence sync worker is started,
and it hits the above error.

Is this loop required? Instead we could just use a bool like
'is_any_sequence_not_ready'. Thoughts?

===== sequencesync.c =====
2. The function name should be 'LogicalRepSyncSequences' instead of
'LogicalRepSyncSeqeunces'

3. In function 'LogicalRepSyncSeqeunces'
sequencerel = table_open(seqinfo->relid, RowExclusiveLock);\
There is an extra '\' symbol at the end of the line.

4. In function LogicalRepSyncSeqeunces:
+ ereport(LOG,
+ errmsg("logical replication synchronization for subscription \"%s\",
sequence \"%s\" has finished",
+   get_subscription_name(subid, false), RelationGetRelationName(sequencerel)));
+ table_close(sequencerel, NoLock);
+
+ currseq++;
+
+ if (currseq % MAX_SEQUENCES_SYNC_PER_BATCH == 0 || currseq ==
list_length(sequences))
+ CommitTransactionCommand();

The above message gets logged even if the changes are not committed.
Suppose the sequence sync worker exits before the commit for some reason.
Though the log will show that the sequence is synced, the sequence will
still be in the 'init' state. I think this is not desirable.
Maybe we should log the synced sequences at commit time? Thoughts?

===== General ====
5. We can use other macros like 'foreach_ptr' instead of 'foreach'

Thanks for the comments. The attached patches include fixes for these.

Regards,
Vignesh

Attachments:

v20240625-0001-Introduce-pg_sequence_state-and-SetSequenc.patchtext/x-patch; charset=US-ASCII; name=v20240625-0001-Introduce-pg_sequence_state-and-SetSequenc.patchDownload
From 4c6daeb460ace314b220a9bf1a5ff59f2afd8ce1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240625 1/3] Introduce pg_sequence_state and SetSequence
 functions for enhanced sequence management

This patch introduces new functionalities to PostgreSQL:
- pg_sequence_state allows retrieval of the current sequence state,
  including the page LSN.
- SetSequence enables setting a sequence to a user-specified value.
---
 src/backend/commands/sequence.c | 161 ++++++++++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat |   8 ++
 src/include/commands/sequence.h |   1 +
 3 files changed, 162 insertions(+), 8 deletions(-)

diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 28f8522264..57453a7356 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,80 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ */
+void
+SetSequence(Oid seq_relid, int64 value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		/* allow filtering by origin on a sequence update */
+		XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/* Clear local cache so that we don't think we have cached numbers */
+	/* Note that we do not change the currval() state */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +552,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -551,7 +627,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -680,7 +756,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -976,7 +1052,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1180,7 +1256,8 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn)
 {
 	Page		page;
 	ItemId		lp;
@@ -1197,6 +1274,13 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/*
+	 * If the caller requested it, set the page LSN. This allows deciding
+	 * which sequence changes are before/after the returned sequence state.
+	 */
+	if (lsn)
+		*lsn = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1804,7 +1888,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1819,6 +1903,67 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+	XLogRecPtr	lsn;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4];
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	is_called = seq->is_called;
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	values[0] = LSNGetDatum(lsn);
+	values[1] = Int64GetDatum(last_value);
+	values[2] = Int64GetDatum(log_cnt);
+	values[3] = BoolGetDatum(is_called);
+
+	memset(nulls, 0, sizeof(nulls));
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6a5476d3c4..990ef2f836 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..fad731a733 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, int64 value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
-- 
2.34.1
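
(For reference, the function added by the patch above can be queried directly
on the publisher; a minimal sketch, where the sequence name 's' is only a
placeholder:

    -- current on-disk state of the sequence, including the page LSN
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('s'::regclass);

The sequence sync worker in the 0003 patch reads last_value + log_cnt together
with page_lsn from this function to set the subscriber-side sequence.)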

v20240625-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From 5509b3ed0acbd06678d34f951b34ee36fbe977fe Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240625 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added to the pg_subscription_rel catalog in 'init' state.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
	ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Sequences newly added on the publisher are added to
     pg_subscription_rel in 'init' state.
   - The sequence sync worker then synchronizes them, following the
     same process as during subscription creation.
   - Only the newly added sequences are synchronized; sequences that
     were already synchronized are left untouched.

3) A new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds sequences newly created on the
     publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - The sequence sync worker then re-synchronizes all sequences,
     following the same process as during subscription creation (see
     the usage sketch after this list).
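
For illustration, a minimal usage sketch of the commands described above (the
subscription name, publication name, and connection string are placeholders):

    -- on the subscriber: published sequences are added in 'init' state and
    -- then synchronized by the sequence sync worker
    CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub1;

    -- pick up newly added sequences; only those are synchronized
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- re-synchronize the data of all published sequences
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;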
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  11 +
 src/backend/catalog/pg_subscription.c         |  64 ++++
 src/backend/commands/subscriptioncmds.c       | 264 +++++++++++++-
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |   9 +
 src/backend/postmaster/bgworker.c             |   3 +
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  50 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 325 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 146 +++++++-
 src/backend/replication/logical/worker.c      |  12 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |   1 +
 src/include/nodes/parsenodes.h                |   1 +
 src/include/replication/logicalworker.h       |   1 +
 src/include/replication/worker_internal.h     |  13 +
 src/test/subscription/t/034_sequences.pl      | 145 ++++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1041 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0c7a9082c5..f5c39432f7 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5200,8 +5200,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and the sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 746d5bd330..5d9d6f3e50 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the table synchronization workers, the
+    sequence synchronization worker, and parallel apply workers.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index b2ad9b446f..5f0170272f 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2009,8 +2009,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..fc8a33c0b5 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher and re-synchronize
+      the data for all sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..7673f1384c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -551,3 +552,66 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid, char state)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	MemoryContext oldctx;
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	if (state != '\0')
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(state));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subseq;
+		SubscriptionRelState *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		subseq = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		seqinfo = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+		seqinfo->relid = subseq->srrelid;
+		d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
+							Anum_pg_subscription_rel_srsublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
\ No newline at end of file
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..32e19a739c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -759,6 +760,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List *sequences;
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -769,6 +771,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * Get the table list from publisher and build local table status
 			 * info.
@@ -898,6 +916,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		/* Get the table list from publisher. */
 		pubrel_names = fetch_table_list(wrconn, sub->publications);
 
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
+
 		/* Get local table list. */
 		subrel_states = GetSubscriptionRelations(sub->oid, false);
 		subrel_count = list_length(subrel_states);
@@ -980,6 +1001,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1006,13 +1028,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				/* Stop the worker if the relation is not a sequence. */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
@@ -1047,7 +1071,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		for (off = 0; off < remove_rel_len; off++)
 		{
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+				get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1077,6 +1102,142 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequences data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid, '\0');
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build qsorted array of local sequence oids for faster lookup. This
+		 * can potentially contain all sequences in the database so speed of
+		 * lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach_ptr(SubscriptionRelState, seqinfo, subseq_states)
+			subseq_local_oids[off++] = seqinfo->relid;
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and try to match them to locally
+		 * known sequences. If the sequence is not known locally create a new
+		 * state for it.
+		 *
+		 * Also builds array of local oids of remote sequences for the next
+		 * step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach_ptr(RangeVar, rv, pubseq_names)
+		{
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+
+			if (!bsearch(&relid, subseq_local_oids,
+						 subrel_count, sizeof(Oid), oid_cmp))
+			{
+				AddSubscriptionRelState(sub->oid, relid,
+										SUBREL_STATE_INIT,
+										InvalidXLogRecPtr, true);
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+										 rv->schemaname, rv->relname, sub->name)));
+			}
+		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			else
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
+			}
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1404,6 +1565,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
@@ -2060,11 +2235,17 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
+		char	   *schemaname;
+		char	   *tablename;
+
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			schemaname = get_namespace_name(get_rel_namespace(relid));
+			tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2234,6 +2415,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	ListCell   *lc;
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach(lc, publications)
+	{
+		char	   *pubname = strVal(lfirst(lc));
+
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index d99285dfa3..78acd3a0d2 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10823,6 +10823,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b37ccb85ad..ac68e5a609 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -132,6 +132,9 @@ static const struct
 	},
 	{
 		"TablesyncWorkerMain", TablesyncWorkerMain
+	},
+	{
+		"SequencesyncWorkerMain", SequencesyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..466771d775 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -267,6 +267,39 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip parallel apply workers. */
+		/* Skip workers that are not sequence sync workers. */
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -311,6 +344,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -320,7 +354,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - parallel apply worker is the only kind of subworker
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
-	Assert(is_tablesync_worker == OidIsValid(relid));
+	Assert(is_tablesync_worker == OidIsValid(relid) || is_sequencesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
 	ereport(DEBUG1,
@@ -396,7 +430,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -491,6 +526,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -1351,6 +1394,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..7ed6b5e7f8
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,325 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = (Datum) 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch info for sequence \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * Logical replication of sequences is based on decoding WAL records that
+	 * describe the "next" state of the sequence, a state which the current
+	 * state in the relfilenode has yet to reach. But during the initial sync
+	 * we read the current state, so we need to reconstruct the value that was
+	 * WAL-logged when the current batch of sequence values was started.
+	 *
+	 * Otherwise we might get duplicate values (on the subscriber) if we
+	 * failed over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* set the sequence in a non-transactional way */
+	SetSequence(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List *sequences;
+	char	   slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner;
+	int 		curr_seq = 0;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionSequences(subid,
+										 SUBREL_STATE_INIT);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+
+	seq_count = list_length(sequences);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences)
+	{
+		Relation	sequencerel;
+		XLogRecPtr	sequence_lsn;
+		int			next_seq;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
+			StartTransactionCommand();
+
+		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+									ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						get_relkind_objtype(sequencerel->rd_rel->relkind),
+						RelationGetRelationName(sequencerel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or superuser,
+		 * who has it implicitly), but other roles should not be able to
+		 * circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		table_close(sequencerel, NoLock);
+
+		next_seq = curr_seq + 1;
+		if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+		}
+
+		curr_seq++;
+	}
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if it's required.
+ *
+ * Note that we don't handle FATAL errors here; they are probably caused by
+ * system resource errors and are not repeatable, so the worker does not
+ * attempt to recover from them.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication Sequencesync worker entry point */
+void
+SequencesyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(false);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b00267f042..5541187353 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -139,9 +139,9 @@ static StringInfo copybuf = NULL;
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(bool istable)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,10 +157,15 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (istable)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
@@ -387,7 +392,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(true);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -463,6 +468,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +493,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +671,106 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If there is a sequence synchronization worker running already, there is no
+ * need to start another one; the existing sequence sync worker will
+ * synchronize the sequences. If any sequences still need to be synced after
+ * the sequence sync worker has exited, a new sequence sync worker can be
+ * started in the next iteration. To prevent starting the sequence sync
+ * worker at a high frequency after a failure, we store its last start time.
+ * We start the sync worker for the same relation only after waiting at least
+ * wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Start sequence sync worker if there is no sequence sync worker running.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+															true);
+		/*
+		 * If there is a sequence sync worker, the sequence sync worker
+		 * will handle sync of this sequence.
+		 */
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int	nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence sync
+			 * worker to sync the sequences and break from the loop, as this
+			 * sequence sync worker will take care of synchronizing all the
+			 * sequences that are in init state.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,9 +793,16 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
 			break;
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1320,7 +1438,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(true);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1716,7 +1834,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(true);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b5a80fe3e8..1211de1e27 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,12 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4631,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4646,7 +4656,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index f1ee348909..a2e8bd9d44 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..3cf7834f8d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -91,5 +91,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionSequences(Oid subid, char state);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 62d1cf47e2..76485b2a60 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4215,6 +4215,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..f380c1ba60 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -19,6 +19,7 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
 extern void TablesyncWorkerMain(Datum main_arg);
+extern void SequencesyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..23b4267598 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -32,6 +32,7 @@ typedef enum LogicalRepWorkerType
 	WORKERTYPE_TABLESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
+	WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;
 
 typedef struct LogicalRepWorker
@@ -240,6 +241,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +255,8 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(bool istable);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -329,6 +334,8 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
 #define isTablesyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequencesyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
@@ -336,6 +343,12 @@ am_tablesync_worker(void)
 	return isTablesyncWorker(MyLogicalRepWorker);
 }
 
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequencesyncWorker(MyLogicalRepWorker);
+}
+
 static inline bool
 am_leader_apply_worker(void)
 {
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..94bf83a14b
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,145 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+	CREATE SEQUENCE s2;
+	CREATE SEQUENCE s3;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+# Wait for initial sync to finish as well
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');
+
+# create a new sequence, it should be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s2;
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# changes to existing sequences should not be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+# Refresh the publication after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'existing sequence not re-synchronized by REFRESH PUBLICATION');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '132|0|t', 'newly added sequence synchronized by REFRESH PUBLICATION');
+
+# Changes of both new and existing sequence should be synced after REFRESH
+# PUBLICATION SEQUENCES.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s3;
+	INSERT INTO seq_test SELECT nextval('s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Refresh publication sequences after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '231|0|t', 'existing sequence re-synchronized by REFRESH PUBLICATION SEQUENCES');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s3;
+));
+
+is($result, '132|0|t', 'newly added sequence synchronized by REFRESH PUBLICATION SEQUENCES');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 61ad417cde..7e10347fe0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2768,6 +2768,7 @@ SubscriptingRefState
 Subscription
 SubscriptionInfo
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1
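
(As the new TAP test does, sequence synchronization progress can be watched on
the subscriber via pg_subscription_rel; the query below is only a monitoring
sketch:

    SELECT srrelid::regclass AS relname, srsubstate
      FROM pg_subscription_rel;

Sequences start out in state 'i' (init) and are set to 'r' (ready) once the
sequence sync worker has copied them.)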

v20240625-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From c4b630ac757919105656864bf23b2900a64df2ee Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240625 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications. This improvement facilitates seamless
synchronization of sequence data during operations such as
CREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore, the
psql \d and \dRp commands have been enhanced to better display which
publications include a given sequence and which sequences are included in a
publication.
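
As a rough illustration of the new syntax and catalog view (the publication
name is a placeholder):

    -- on the publisher: publish all sequences in the database
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- list the sequences included in the publication
    SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

The psql \d and \dRp enhancements mentioned above expose the same information
interactively.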
---
 doc/src/sgml/ref/create_publication.sgml  |  23 +-
 doc/src/sgml/system-views.sgml            |  67 ++++
 src/backend/catalog/pg_publication.c      |  86 ++++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    |  54 ++-
 src/backend/parser/gram.y                 |  22 +-
 src/bin/pg_dump/pg_dump.c                 |  21 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  11 +
 src/bin/psql/describe.c                   | 218 ++++++++---
 src/bin/psql/tab-complete.c               |   4 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   9 +
 src/include/nodes/parsenodes.h            |   6 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 449 ++++++++++++----------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  15 +
 18 files changed, 712 insertions(+), 303 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..107ce63b2b 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -119,10 +124,11 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
+    <term><literal>FOR ALL SEQUENCES</literal></term>
     <listitem>
      <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      sequences in the database, including those created in the future.
      </para>
     </listitem>
    </varlistentry>
@@ -240,10 +246,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR ALL SEQUENCES</literal>, or <literal>FOR TABLES IN SCHEMA</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,7 +265,8 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR ALL SEQUENCES</command>, and
    <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
    user to be a superuser.
   </para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 8c18bea902..7491d50dc4 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2138,6 +2143,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..175caf23d0 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1254,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;	/* NIL if nothing to return */
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index efb29adeb3..1057946dc0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -398,6 +398,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..475256f9e4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -727,6 +727,7 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 ObjectAddress
 CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 {
+	ListCell   *lc;
 	Relation	rel;
 	ObjectAddress myself;
 	Oid			puboid;
@@ -741,6 +742,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	bool		for_all_tables = false;
+	bool		for_all_sequences = false;
+
+	/*
+	 * Translate the list of object types (represented by strings) to bool
+	 * flags.
+	 */
+	foreach(lc, stmt->for_all_objects)
+	{
+		char	   *val = strVal(lfirst(lc));
+
+		if (strcmp(val, "tables") == 0)
+			for_all_tables = true;
+		else if (strcmp(val, "sequences") == 0)
+			for_all_sequences = true;
+	}
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
@@ -748,11 +766,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 					   get_database_name(MyDatabaseId));
 
 	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	if (for_all_tables && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 				 errmsg("must be superuser to create FOR ALL TABLES publication")));
 
+	/* FOR ALL SEQUENCES requires superuser */
+	if (for_all_sequences && !superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
 	/* Check if name is used */
@@ -782,7 +806,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+		BoolGetDatum(for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,12 +834,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (for_all_tables || for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+
+	/*
+	 * Unless both FOR ALL TABLES and FOR ALL SEQUENCES were specified, the
+	 * publication may list individual tables or schemas; process those.
+	 */
+	if (!for_all_tables || !for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1008,7 +1039,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1525,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1749,7 +1780,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, rels)
 	{
@@ -1828,7 +1859,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, schemas)
 	{
@@ -1919,6 +1950,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4d582950b7..d99285dfa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -455,7 +455,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <node>	pub_obj_type
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10557,6 +10558,8 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION FOR ALL TABLES [WITH options]
  *
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
  * pub_obj is one of:
@@ -10575,13 +10578,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					n->for_all_objects = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10696,19 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+pub_obj_type:	TABLES
+					{ $$ = (Node *) makeString("tables"); }
+				| SEQUENCES
+					{ $$ = (Node *) makeString("sequences"); }
+	;
+
+pub_obj_type_list:	pub_obj_type
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' pub_obj_type
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e324070828..953b9128ad 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4185,6 +4185,7 @@ getPublications(Archive *fout, int *numPublications)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4204,23 +4205,29 @@ getPublications(Archive *fout, int *numPublications)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4232,6 +4239,7 @@ getPublications(Archive *fout, int *numPublications)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4251,6 +4259,8 @@ getPublications(Archive *fout, int *numPublications)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4304,6 +4314,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
 
+	if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
 	{
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 865823868f..976115ab3d 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..ace1d1b661 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,17 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f67bf0b892..482c756b3a 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,52 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2110,11 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6219,7 +6281,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6233,19 +6295,37 @@ listPublications(const char *pattern)
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT pubname AS \"%s\",\n"
-					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
-					  gettext_noop("Name"),
-					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
-					  gettext_noop("Inserts"),
-					  gettext_noop("Updates"),
-					  gettext_noop("Deletes"));
+	if (pset.sversion >= 170000)
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  puballsequences AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("All sequences"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6343,6 +6423,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6359,6 +6440,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6372,6 +6454,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6423,6 +6509,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6430,6 +6518,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6440,6 +6530,10 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..f1ee348909 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,9 +3159,9 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
 		COMPLETE_WITH("TABLES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 990ef2f836..8e68adb01f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11945,6 +11945,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..8cc0fe9d0d 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass all
+	 * sequences in the database (except unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -148,6 +155,8 @@ extern List *GetPubPartitionOptionRelations(List *result,
 extern Oid	GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
 											int *ancestor_level);
 
+extern List *GetAllSequencesPublicationRelations(void);
+
 extern bool is_publishable_relation(Relation rel);
 extern bool is_schema_publication(Oid pubid);
 extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..62d1cf47e2 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4168,7 +4168,8 @@ typedef struct CreatePublicationStmt
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Object types ("tables", "sequences")
+									 * published for the entire database */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
@@ -4191,7 +4192,8 @@ typedef struct AlterPublicationStmt
 	 * objects.
 	 */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Object types ("tables", "sequences")
+									 * published for the entire database */
 	AlterPublicationAction action;	/* What action to perform with the given
 									 * objects */
 } AlterPublicationStmt;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..d54008ae6f 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,53 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+         pubname         | puballtables | puballsequences 
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f            | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+                       Sequence "pub_test.testpub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_forallsequences"
+
+\dRp+ testpub_forallsequences
+                                    Publication testpub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +272,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +290,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +322,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +338,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +357,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +368,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +404,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +417,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +535,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +752,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +939,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1147,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1188,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1269,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1282,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1311,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1337,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1408,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1419,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1440,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1452,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1464,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1475,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1486,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1497,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1528,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1540,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1622,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1643,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 13178e2b3d..1c646f15a1 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1441,6 +1441,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..e9fc959c8b 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,21 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
-- 
2.34.1

#58Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#57)
1 attachment(s)
Re: Logical Replication of sequences

Here are my initial review comments for the first patch v20240625-0001.

======
General

1. Missing docs?

Section 9.17. "Sequence Manipulation Functions" [1] describes some
functions. Shouldn't your new function be documented here also?

~~~

2. Missing tests?

Shouldn't there be some test code that at least executes your new
pg_sequence_state function to verify that sane values are returned?

======
Commit Message

3.
This patch introduces new functionalities to PostgreSQL:
- pg_sequence_state allows retrieval of sequence values using LSN.
- SetSequence enables updating sequences with user-specified values.

~

3a.
I didn't understand why this says "using LSN" because IIUC 'lsn' is an
output parameter of that function. Don't you mean "... retrieval of
sequence values including LSN"?

~

3b.
Does "user-specified" make sense? Is this going to be exposed to a
user? How about just "specified"?

======
src/backend/commands/sequence.c

4. SetSequence:

+void
+SetSequence(Oid seq_relid, int64 value)

Would 'new_last_value' be a better parameter name here?

~~~

5.
This new function logic looks pretty similar to the do_setval()
function. Can you explain (maybe in the function comment) some info
about how and why it differs from that other function?

~~~

6.
I saw that RelationNeedsWAL() is called 2 times. It may make no sense,
but is it possible to assign that to a variable 1st time so you don't
need to call it 2nd time within the critical section?
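
Something like this is what I had in mind (untested sketch, mirroring the
structure of the patch; 'need_wal' is just a suggested name):

    bool        need_wal = RelationNeedsWAL(seqrel);

    if (need_wal)
    {
        GetTopTransactionId();

        if (XLogLogicalInfoActive())
            GetCurrentTransactionId();
    }

    START_CRIT_SECTION();
    ...
    if (need_wal)
    {
        /* XLOG stuff, reusing the value computed above */
    }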

~~~

NITPICK - remove junk (') char in comment

NITPICK - missing periods (.) in multi-sentence comment

~~~

7.
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+    XLogRecPtr *lsn)

7a.
The existing parameters were described in the function comment. So,
the new 'lsn' parameter should be described here also.

~

7b.
Maybe the new parameter name should be 'lsn_res' or 'lsn_out' or
similar to emphasise that this is a returned value.

~~

NITPICK - tweaked comment. YMMV.

~~~

8. pg_sequence_state:

Should you give descriptions of the output parameters in the function
header comment? Otherwise, where are they described so the caller knows
what they mean?

~~~

NITPICK - /relid/seq_relid/

NITPICK - declare the variables in the same order as the output parameters

NITPICK - An alternative to the memset for nulls is just to use static
initialisation
"bool nulls[4] = {false, false, false, false};"

======
+extern void SetSequence(Oid seq_relid, int64 value);

9.
Would 'SetSequenceLastValue' be a better name for what this function is doing?

======

99.
See also my attached diff which is a top-up patch implementing those
nitpicks mentioned above. Please apply any of these that you agree
with.

======
[1]: https://www.postgresql.org/docs/devel/functions-sequence.html

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240626_SEQ_0001.txt (text/plain; charset=US-ASCII)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 57453a7..9bad121 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -349,7 +349,7 @@ SetSequence(Oid seq_relid, int64 value)
 	/* open and lock sequence */
 	init_sequence(seq_relid, &elm, &seqrel);
 
-	/* lock page' buffer and read tuple */
+	/* lock page buffer and read tuple */
 	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	/* check the comment above nextval_internal()'s equivalent call. */
@@ -397,8 +397,10 @@ SetSequence(Oid seq_relid, int64 value)
 
 	UnlockReleaseBuffer(buf);
 
-	/* Clear local cache so that we don't think we have cached numbers */
-	/* Note that we do not change the currval() state */
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
 	elm->cached = elm->last;
 
 	relation_close(seqrel, NoLock);
@@ -1275,8 +1277,9 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
 			 RelationGetRelationName(rel), sm->magic);
 
 	/*
-	 * If the caller requested it, set the page LSN. This allows deciding
-	 * which sequence changes are before/after the returned sequence state.
+	 * If the caller requested it, return the page LSN. This allows the
+	 * caller to determine which sequence changes are before/after the
+	 * returned sequence state.
 	 */
 	if (lsn)
 		*lsn = PageGetLSN(page);
@@ -1912,7 +1915,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 Datum
 pg_sequence_state(PG_FUNCTION_ARGS)
 {
-	Oid			relid = PG_GETARG_OID(0);
+	Oid			seq_relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
 	Buffer		buf;
@@ -1920,21 +1923,21 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	Form_pg_sequence_data seq;
 	Datum		result;
 
+	XLogRecPtr	lsn;
 	int64		last_value;
 	int64		log_cnt;
 	bool		is_called;
-	XLogRecPtr	lsn;
 
 	TupleDesc	tupdesc;
 	HeapTuple	tuple;
 	Datum		values[4];
-	bool		nulls[4];
+	bool		nulls[4] = {false, false, false, false};
 
 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
 		elog(ERROR, "return type must be a row type");
 
 	/* open and lock sequence */
-	init_sequence(relid, &elm, &seqrel);
+	init_sequence(seq_relid, &elm, &seqrel);
 
 	if (pg_class_aclcheck(elm->relid, GetUserId(),
 						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
@@ -1957,8 +1960,6 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	values[2] = Int64GetDatum(log_cnt);
 	values[3] = BoolGetDatum(is_called);
 
-	memset(nulls, 0, sizeof(nulls));
-
 	tuple = heap_form_tuple(tupdesc, values, nulls);
 	result = HeapTupleGetDatum(tuple);
 
#59Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#57)
1 attachment(s)
Re: Logical Replication of sequences

Here are some review comments for the patch v20240625-0002

======
Commit Message

1.
This commit enhances logical replication by enabling the inclusion of all
sequences in publications. This improvement facilitates seamless
synchronization of sequence data during operations such as
CREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.

~

Isn't this description getting ahead of the functionality a bit? For
example, it talks about operations like REFRESH PUBLICATION SEQUENCES
but AFAIK that syntax does not exist just yet.

~~~

2.
The commit message should mention that you are only introducing new
syntax for "FOR ALL SEQUENCES" here, but syntax for "FOR SEQUENCE" is
being deferred to some later patch. Without such a note it is not
clear why the gram.y syntax and docs seemed only half done.

======
doc/src/sgml/ref/create_publication.sgml

3.
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
+    <term><literal>FOR ALL SEQUENCES</literal></term>
     <listitem>
      <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      sequences in the database, including tables created in the future.

It might be better here to keep descriptions for "ALL TABLES" and "ALL
SEQUENCES" separated, otherwise the wording does not quite seem
appropriate for sequences (e.g. where it says "including tables
created in the future").

~~~

NITPICK - missing spaces
NITPICK - removed Oxford commas since previously there were none

~~~

4.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR ALL SEQUENCES</literal>,or <literal>FOR TABLES IN
SCHEMA</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.

It seems like "FOR ALL SEQUENCES" is out of place since it is jammed
between other clauses referring to TABLES. Would it be better to
mention SEQUENCES last in the list?

~~~

5.
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR ALL SEQUENCES</command>, and
    <command>FOR TABLES IN SCHEMA</command> clauses require the invoking

ditto of #4 above.

======
src/backend/catalog/pg_publication.c

GetAllSequencesPublicationRelations:

NITPICK - typo /relation/relations/

======
src/backend/commands/publicationcmds.c

6.
+ foreach(lc, stmt->for_all_objects)
+ {
+ char    *val = strVal(lfirst(lc));
+
+ if (strcmp(val, "tables") == 0)
+ for_all_tables = true;
+ else if (strcmp(val, "sequences") == 0)
+ for_all_sequences = true;
+ }

Consider the foreach_ptr macro to slightly simplify this code.
Actually, this whole logic seems cumbersome -- can’t the parser assign
flags automatically? Please see my more detailed comment #10 below
about this in gram.y
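
For example (untested sketch, reusing the variable names from the quoted hunk):

    foreach_ptr(String, obj, stmt->for_all_objects)
    {
        if (strcmp(obj->sval, "tables") == 0)
            for_all_tables = true;
        else if (strcmp(obj->sval, "sequences") == 0)
            for_all_sequences = true;
    }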

~~~

7.
  /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ if (for_all_tables && !superuser())
  ereport(ERROR,
  (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
  errmsg("must be superuser to create FOR ALL TABLES publication")));
+ /* FOR ALL SEQUENCES requires superuser */
+ if (for_all_sequences && !superuser())
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+

The current code is easy to read, but I wonder if it should try harder
to share common code, or at least a common translatable message like
"must be superuser to create %s publication".

~~~

8.
- else
+
+ /*
+ * If the publication might have either tables or sequences (directly or
+ * through a schema), process that.
+ */
+ if (!for_all_tables || !for_all_sequences)

I did not understand why this code cannot just say "else" like before,
because the direct or through-schema syntax cannot be specified at the
same time as "FOR ALL ...", so why is the more complicated condition
necessary? Also, the similar code in AlterPublicationOptions() was not
changed to be like this.

======
src/backend/parser/gram.y

9. comment

  *
  * CREATE PUBLICATION FOR ALL TABLES [WITH options]
  *
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]

The comment is not quite correct because actually you are allowing
simultaneous FOR ALL TABLES, SEQUENCES. It should be more like:

CREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]

pub_obj_type is one of:
TABLES
SEQUENCES

~~~

10.
+pub_obj_type: TABLES
+ { $$ = (Node *) makeString("tables"); }
+ | SEQUENCES
+ { $$ = (Node *) makeString("sequences"); }
+ ;
+
+pub_obj_type_list: pub_obj_type
+ { $$ = list_make1($1); }
+ | pub_obj_type_list ',' pub_obj_type
+ { $$ = lappend($1, $3); }
+ ;

IIUC the only thing you need is a flag to say if FOR ALL TABLES is in
effect and another flag to say if FOR ALL SEQUENCES is in effect. So,
it seemed clunky to build up a temporary list of "tables" or
"sequences" strings here, which is subsequently scanned by
CreatePublication to be turned back into booleans.

Can't we just change the CreatePublicationStmt field to have:

A) a 'for_all_types' bitmask instead of a list:
0x0000 means FOR ALL is not specified
0x0001 means ALL TABLES
0x0010 means ALL SEQUENCES

Or, B) have 2 boolean fields ('for_all_tables' and 'for_all_sequences')

...where the gram.y code can be written to assign the flag/s values directly?
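
For option B, the parsenodes.h side could be as simple as this (sketch only;
the new field name is just a suggestion):

    typedef struct CreatePublicationStmt
    {
        NodeTag     type;
        char       *pubname;        /* Name of the publication */
        List       *options;        /* List of DefElem nodes */
        List       *pubobjects;     /* Optional list of publication objects */
        bool        for_all_tables;     /* FOR ALL TABLES was specified */
        bool        for_all_sequences;  /* FOR ALL SEQUENCES was specified */
    } CreatePublicationStmt;

and then the gram.y actions for TABLES/SEQUENCES would just set those
booleans instead of appending strings to a list.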

======
src/bin/pg_dump/pg_dump.c

11.
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");

+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+

Hmm. Is that correct? It looks like a possible bug, because if both
flags are true it will give invalid syntax like "FOR ALL TABLES FOR
ALL SEQUENCES" instead of "FOR ALL TABLES, SEQUENCES"

======
src/bin/pg_dump/t/002_pg_dump.pl

12.
This could also try the test scenario of both FOR ALL being
simultaneously set ("FOR ALL TABLES, SEQUENCES") to check for bugs
like the suspected one in dump.c review comment #11 above.

======
src/bin/psql/describe.c

13.
+ if (pset.sversion >= 170000)
+ printfPQExpBuffer(&buf,
+   "SELECT pubname AS \"%s\",\n"
+   "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+   "  puballtables AS \"%s\",\n"
+   "  puballsequences AS \"%s\",\n"
+   "  pubinsert AS \"%s\",\n"
+   "  pubupdate AS \"%s\",\n"
+   "  pubdelete AS \"%s\"",
+   gettext_noop("Name"),
+   gettext_noop("Owner"),
+   gettext_noop("All tables"),
+   gettext_noop("All sequences"),
+   gettext_noop("Inserts"),
+   gettext_noop("Updates"),
+   gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+   "SELECT pubname AS \"%s\",\n"
+   "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+   "  puballtables AS \"%s\",\n"
+   "  pubinsert AS \"%s\",\n"
+   "  pubupdate AS \"%s\",\n"
+   "  pubdelete AS \"%s\"",
+   gettext_noop("Name"),
+   gettext_noop("Owner"),
+   gettext_noop("All tables"),
+   gettext_noop("Inserts"),
+   gettext_noop("Updates"),
+   gettext_noop("Deletes"));
+

IMO this should be coded differently so that only the
"puballsequences" column is guarded by the (pset.sversion >= 170000),
and everything else is the same as before. This suggested way would
also be consistent with the existing code version checks (e.g. for
"pubtruncate" or for "pubviaroot").

~~~

NITPICK - Add blank lines
NITPICK - space in "ncols ++"

======
src/bin/psql/tab-complete.c

14.
Hmm. When I tried this, it didn't seem to be working properly.

For example "CREATE PUBLICATION pub1 FOR ALL" only completes with
"TABLES" but not "SEQUENCES".
For example "CREATE PUBLICATION pub1 FOR ALL SEQ" doesn't complete
"SEQUENCES" properly

======
src/include/catalog/pg_publication.h

NITPICK - move the extern to be adjacent to others like it.

======
src/include/nodes/parsenodes.h

15.
- bool for_all_tables; /* Special publication for all tables in db */
+ List    *for_all_objects; /* Special publication for all objects in
+ * db */
 } CreatePublicationStmt;

I felt this List logic is a bit strange. See my comment #10 in gram.y
for more details.

~~~

16.
- bool for_all_tables; /* Special publication for all tables in db */
+ List    *for_all_objects; /* Special publication for all objects in
+ * db */

Ditto comment #15 in AlterPublicationStmt

======
src/test/regress/sql/publication.sql

17.
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication
WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1

Should you also do "\d+ testpub_seq0" here? Otherwise, what was the
point of defining the testpub_seq0 sequence in this test?

~~~

18.
Maybe there are missing test cases for different syntax combinations like:

FOR ALL TABLES, SEQUENCES
FOR ALL SEQUENCES, TABLES

Note that the current list logic of this patch even accepts my
following bogus statement as valid syntax.

test_pub=# CREATE PUBLICATION pub_silly FOR ALL TABLES, SEQUENCES,
TABLES, TABLES, TABLES, SEQUENCES;
CREATE PUBLICATION
test_pub=#

======
99.
Please also refer to the attached nitpicks patch which implements all
the cosmetic issues identified above as NITPICKS.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240701_SEQ_0002.txt (text/plain; charset=US-ASCII)
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 107ce63..de66031 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -247,7 +247,7 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
-   <literal>FOR ALL SEQUENCES</literal>,or <literal>FOR TABLES IN SCHEMA</literal>
+   <literal>FOR ALL SEQUENCES</literal> or <literal>FOR TABLES IN SCHEMA</literal>
    are not specified, then the publication starts out with an empty set of
    tables.  That is useful if tables or schemas are to be added later.
   </para>
@@ -266,7 +266,7 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <para>
    To add a table to a publication, the invoking user must have ownership
    rights on the table.  The <command>FOR ALL TABLES</command>,
-   <command>FOR ALL SEQUENCES</command>, and
+   <command>FOR ALL SEQUENCES</command> and
    <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
    user to be a superuser.
   </para>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 175caf2..998a840 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -999,7 +999,7 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 }
 
 /*
- * Gets list of all relation published by FOR ALL SEQUENCES publication(s).
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
  */
 List *
 GetAllSequencesPublicationRelations(void)
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 482c756..a609ab5 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -2113,6 +2113,7 @@ describeOneTableDetails(const char *schemaname,
 	res = PSQLexec(buf.data);
 	if (!res)
 		goto error_return;
+
 	numrows = PQntuples(res);
 
 	/* Generate table cells to be printed */
@@ -6510,7 +6511,7 @@ describePublications(const char *pattern)
 		if (has_pubviaroot)
 			ncols++;
 		if (has_pubsequence)
-			ncols ++;
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 8cc0fe9..4b402a6 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -143,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
@@ -155,8 +156,6 @@ extern List *GetPubPartitionOptionRelations(List *result,
 extern Oid	GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
 											int *ancestor_level);
 
-extern  List *GetAllSequencesPublicationRelations(void);
-
 extern bool is_publishable_relation(Relation rel);
 extern bool is_schema_publication(Oid pubid);
 extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
#60vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#58)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 26 Jun 2024 at 14:41, Peter Smith <smithpb2250@gmail.com> wrote:

Here are my initial review comments for the first patch v20240625-0001.

======
General

6.
I saw that RelationNeedsWAL() is called 2 times. It may make no sense,
but is it possible to assign that to a variable 1st time so you don't
need to call it 2nd time within the critical section?

I felt this is OK; we do something similar in other places too, such as
the fill_seq_fork_with_data function in the same file.

I have fixed the other comments and merged the nitpick changes. The
attached patch includes these changes.

Regards,
Vignesh

Attachments:

v20240702-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch; charset=US-ASCII)
From eb27cb2a11d8e59fc3f065ed4500efe55d75c278 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240702] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces new functionalities to PostgreSQL:
- pg_sequence_state allows retrieval of sequence values including LSN.
- SetSequenceLastValue enables updating sequences with specified values.
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 174 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 216 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 45e6eb0415..2fc95ef3e1 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19506,6 +19506,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ()
+        <returnvalue>record</returnvalue>
+        ( <parameter>lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence.
+        <literal>page_lsn</literal> is the page LSN of the sequence,
+        <literal>last_value</literal> is the last value written to disk for
+        the sequence, <literal>log_cnt</literal> shows how many fetches
+        remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index b4ad19c053..cd7639f6a7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,13 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to page LSN if the caller requested it
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1285,14 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/*
+	 * If the caller requested it, return the page LSN. This allows the
+	 * caller to determine which sequence changes are before/after the
+	 * returned sequence state.
+	 */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1811,7 +1900,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1826,6 +1915,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	is_called = seq->is_called;
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page lsn for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value written to disk for the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/*
+	 * Shows how many fetches remain before a new WAL record has to be
+	 * written.
+	 */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6a5476d3c4..990ef2f836 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
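
As a rough illustration of how the new function is meant to be used (this
assumes the 0001 patch above is applied; the sequence "s" is only an
example), the sequence sync worker in the later patch essentially runs the
query below against the publisher and applies last_value + log_cnt on the
subscriber, so the subscriber starts beyond any values already covered by
the publisher's last WAL-logged batch:

  -- assumes patch 0001 is applied; "s" is an example sequence
  CREATE SEQUENCE s;
  SELECT nextval('s');

  -- page_lsn is the LSN of the sequence's page; last_value + log_cnt is
  -- the value the sequence sync worker would apply on the subscriber
  SELECT page_lsn, last_value, log_cnt, is_called,
         last_value + log_cnt AS value_to_apply
    FROM pg_sequence_state('s');
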

v20240702-0003-Enhance-sequence-synchronization-during-su.patchtext/x-patch; charset=US-ASCII; name=v20240702-0003-Enhance-sequence-synchronization-during-su.patchDownload
From c85d2c08ceea5a30400bdb4efb23705eb0361178 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240702 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Sequences are refreshed as part of
     ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Sequences newly added on the publisher are added in 'init'
     state to pg_subscription_rel.
   - A sequence sync worker is started, as in the subscription creation
     process, to synchronize the sequences in 'init' state.
   - Consequently, only the newly added sequences are synchronized.

3) Introduce a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds sequences newly added on the
     publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - A sequence sync worker is started, as in the subscription creation
     process, to synchronize all of these sequences.
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  11 +
 src/backend/catalog/pg_subscription.c         |  64 ++++
 src/backend/commands/subscriptioncmds.c       | 264 +++++++++++++-
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |   9 +
 src/backend/postmaster/bgworker.c             |   3 +
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  50 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 325 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 146 +++++++-
 src/backend/replication/logical/worker.c      |  12 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |   1 +
 src/include/nodes/parsenodes.h                |   1 +
 src/include/replication/logicalworker.h       |   1 +
 src/include/replication/worker_internal.h     |  13 +
 src/test/subscription/t/034_sequences.pl      | 145 ++++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1041 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0c7a9082c5..f5c39432f7 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5200,8 +5200,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and the sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 746d5bd330..5d9d6f3e50 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the table synchronization workers, the
+    sequence synchronization worker, and parallel apply workers.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 991f629907..62870aa41b 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..fc8a33c0b5 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher and re-synchronize
+      the data for all sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..7673f1384c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -551,3 +552,66 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid, char state)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	MemoryContext oldctx;
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	if (state != '\0')
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(state));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subseq;
+		SubscriptionRelState *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		subseq = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		seqinfo = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+		seqinfo->relid = subseq->srrelid;
+		d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
+							Anum_pg_subscription_rel_srsublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
\ No newline at end of file
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..32e19a739c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -759,6 +760,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List *sequences;
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -769,6 +771,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * Get the table list from publisher and build local table status
 			 * info.
@@ -898,6 +916,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		/* Get the table list from publisher. */
 		pubrel_names = fetch_table_list(wrconn, sub->publications);
 
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
+
 		/* Get local table list. */
 		subrel_states = GetSubscriptionRelations(sub->oid, false);
 		subrel_count = list_length(subrel_states);
@@ -980,6 +1001,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1006,13 +1028,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				/* Stop the worker if the relation is not a sequence. */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
@@ -1047,7 +1071,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		for (off = 0; off < remove_rel_len; off++)
 		{
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+				get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1077,6 +1102,142 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequence data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid, '\0');
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build qsorted array of local sequence oids for faster lookup. This
+		 * can potentially contain all sequences in the database so speed of
+		 * lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach_ptr(SubscriptionRelState, seqinfo, subseq_states)
+			subseq_local_oids[off++] = seqinfo->relid;
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and try to match them to locally
+		 * known sequences. If the sequence is not known locally create a new
+		 * state for it.
+		 *
+		 * Also builds array of local oids of remote sequences for the next
+		 * step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach_ptr(RangeVar, rv, pubseq_names)
+		{
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+
+			if (!bsearch(&relid, subseq_local_oids,
+						 subrel_count, sizeof(Oid), oid_cmp))
+			{
+				AddSubscriptionRelState(sub->oid, relid,
+										SUBREL_STATE_INIT,
+										InvalidXLogRecPtr, true);
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+										 rv->schemaname, rv->relname, sub->name)));
+			}
+		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			else
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
+			}
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1404,6 +1565,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
@@ -2060,11 +2235,17 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
+		char	   *schemaname;
+		char	   *tablename;
+
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			schemaname = get_namespace_name(get_rel_namespace(relid));
+			tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2234,6 +2415,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	ListCell   *lc;
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach(lc, publications)
+	{
+		char	   *pubname = strVal(lfirst(lc));
+
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 7b0337d24b..916814fda2 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10823,6 +10823,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b37ccb85ad..ac68e5a609 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -132,6 +132,9 @@ static const struct
 	},
 	{
 		"TablesyncWorkerMain", TablesyncWorkerMain
+	},
+	{
+		"SequencesyncWorkerMain", SequencesyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..466771d775 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -267,6 +267,39 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches the given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip workers that are not sequence sync workers. */
+		if (!isSequencesyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -311,6 +344,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -320,7 +354,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - parallel apply worker is the only kind of subworker
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
-	Assert(is_tablesync_worker == OidIsValid(relid));
+	Assert(is_tablesync_worker == OidIsValid(relid) || is_sequencesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
 	ereport(DEBUG1,
@@ -396,7 +430,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -491,6 +526,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -1351,6 +1394,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..7f609fc0e9
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,325 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for table \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * Logical replication of sequences is based on decoding WAL records
+	 * that describe the "next" state of the sequence, which the current
+	 * state in the relfilenode is yet to reach. But during the initial
+	 * sync we read the current state, so we need to reconstruct the WAL
+	 * record logged when the current batch of sequence values was started.
+	 *
+	 * Otherwise we might get duplicate values (on subscriber) if we failed
+	 * over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* Set the sequence value in a non-transactional way. */
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List *sequences;
+	char	   slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner;
+	int 		curr_seq = 0;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionSequences(subid,
+										 SUBREL_STATE_INIT);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply
+	 * worker and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+
+	seq_count = list_length(sequences);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences)
+	{
+		Relation	sequencerel;
+		XLogRecPtr	sequence_lsn;
+		int			next_seq;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
+			StartTransactionCommand();
+
+		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence synchronization runs as the sequence
+		 * owner, unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+									ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						get_relkind_objtype(sequencerel->rd_rel->relkind),
+						RelationGetRelationName(sequencerel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or superuser,
+		 * who has it implicitly), but other roles should not be able to
+		 * circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequencerel, NoLock);
+
+		next_seq = curr_seq + 1;
+		if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+		}
+
+		curr_seq++;
+	}
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if it's required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication Sequencesync worker entry point */
+void
+SequencesyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(false);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b00267f042..5541187353 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -139,9 +139,9 @@ static StringInfo copybuf = NULL;
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(bool istable)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,10 +157,15 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (istable)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequences synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
@@ -387,7 +392,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(true);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -463,6 +468,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +493,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +671,106 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequence synchronization worker is already running, there is no
+ * need to start another one; the existing sequence sync worker will
+ * synchronize the sequences. If any sequences still need to be synced
+ * after that worker exits, a new sequence sync worker can be started in
+ * the next iteration. To prevent starting the sequence sync worker at a
+ * high frequency after a failure, we store its last start time, and a new
+ * sync worker is started only after waiting at least
+ * wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply()
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Start a sequence sync worker if one is not already running.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+															true);
+		/*
+		 * If there is a sequence sync worker running, it will handle the
+		 * synchronization of this sequence.
+		 */
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int	nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence sync
+			 * worker to sync the sequences and break from the loop, as this
+			 * sequence sync worker will take care of synchronizing all the
+			 * sequences that are in init state.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,9 +793,16 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
 			break;
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1320,7 +1438,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(true);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1716,7 +1834,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(true);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b5a80fe3e8..1211de1e27 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,12 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4631,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequences synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4646,7 +4656,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index f1ee348909..a2e8bd9d44 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..3cf7834f8d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -91,5 +91,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionSequences(Oid subid, char state);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 62d1cf47e2..76485b2a60 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4215,6 +4215,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..f380c1ba60 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -19,6 +19,7 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
 extern void TablesyncWorkerMain(Datum main_arg);
+extern void SequencesyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..23b4267598 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -32,6 +32,7 @@ typedef enum LogicalRepWorkerType
 	WORKERTYPE_TABLESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
+	WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;
 
 typedef struct LogicalRepWorker
@@ -240,6 +241,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +255,8 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(bool istable);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -329,6 +334,8 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
 #define isTablesyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequencesyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
@@ -336,6 +343,12 @@ am_tablesync_worker(void)
 	return isTablesyncWorker(MyLogicalRepWorker);
 }
 
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequencesyncWorker(MyLogicalRepWorker);
+}
+
 static inline bool
 am_leader_apply_worker(void)
 {
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..94bf83a14b
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,145 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+	CREATE SEQUENCE s2;
+	CREATE SEQUENCE s3;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+# Wait for initial sync to finish as well
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');
+
+# Create a new sequence; it should be synced after the publication is refreshed
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s2;
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Changes to already-synchronized sequences should not be re-synced by a
+# plain REFRESH PUBLICATION
+$node_publisher->safe_psql(
+	'postgres', qq(
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+# Refresh the publication after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'existing sequence not re-synced by REFRESH PUBLICATION');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '132|0|t', 'newly created sequence synced after REFRESH PUBLICATION');
+
+# Changes to both new and existing sequences should be synced after REFRESH
+# PUBLICATION SEQUENCES.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s3;
+	INSERT INTO seq_test SELECT nextval('s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Refresh publication sequences after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '231|0|t', 'existing sequence re-synced after REFRESH PUBLICATION SEQUENCES');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s3;
+));
+
+is($result, '132|0|t', 'newly created sequence synced after REFRESH PUBLICATION SEQUENCES');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 82b3b411fb..ff92e036b9 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2768,6 +2768,7 @@ SubscriptingRefState
 Subscription
 SubscriptionInfo
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1

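To summarize the behaviour exercised by the TAP test above, the subscriber-side
flow looks roughly like this (a minimal sketch; the connection string and the
publication/sequence names are placeholders taken from the test):

  -- Subscriber side: CREATE SUBSCRIPTION performs the initial sync of the
  -- published sequences along with the tables.
  CREATE SUBSCRIPTION seq_sub
      CONNECTION 'dbname=postgres host=publisher'  -- placeholder connstr
      PUBLICATION seq_pub;

  -- A plain REFRESH PUBLICATION picks up sequences newly added to the
  -- publication but does not re-sync sequences that were already copied.
  ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

  -- REFRESH PUBLICATION SEQUENCES re-synchronizes all published sequences,
  -- including ones whose values advanced on the publisher since the last sync.
  ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

  -- Inspect the replicated sequence state (last_value, log_cnt, is_called).
  SELECT * FROM s;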
Attachment: v20240702-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From b1705b4e23ec7bdf9b0022832a6c37aaa037cc57 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240702 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications, so that sequence data is synchronized during
CREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore,
enhancements to psql commands (\d and \dRp) now allow for better display
of publications containing specific sequences or sequences included in a
publication.
---
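In short, the user-visible pieces added by this patch look roughly like the
following sketch (the publication name is just an example):

  -- Publisher side: a publication covering every sequence in the database
  -- (requires superuser, like FOR ALL TABLES).
  CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

  -- The new system view lists the sequences each publication contains.
  SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
   WHERE pubname = 'seq_pub';

  -- In psql, \dRp+ seq_pub (and plain \dRp) now show an "All sequences"
  -- column next to "All tables".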
 doc/src/sgml/ref/create_publication.sgml  |  23 +-
 doc/src/sgml/system-views.sgml            |  67 ++++
 src/backend/catalog/pg_publication.c      |  86 ++++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    |  54 ++-
 src/backend/parser/gram.y                 |  22 +-
 src/bin/pg_dump/pg_dump.c                 |  21 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  11 +
 src/bin/psql/describe.c                   | 218 ++++++++---
 src/bin/psql/tab-complete.c               |   4 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   9 +
 src/include/nodes/parsenodes.h            |   6 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 449 ++++++++++++----------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  15 +
 18 files changed, 712 insertions(+), 303 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..107ce63b2b 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -119,10 +124,11 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
+    <term><literal>FOR ALL SEQUENCES</literal></term>
     <listitem>
      <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      all sequences in the database, including tables or sequences created in
+      the future.
      </para>
     </listitem>
    </varlistentry>
@@ -240,10 +246,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR ALL SEQUENCES</literal>, or <literal>FOR TABLES IN SCHEMA</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,7 +265,8 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR ALL SEQUENCES</command>, and
    <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
    user to be a superuser.
   </para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 8c18bea902..7491d50dc4 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2138,6 +2143,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   their associated sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..175caf23d0 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1254,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index efb29adeb3..1057946dc0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -398,6 +398,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..475256f9e4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -727,6 +727,7 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 ObjectAddress
 CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 {
+	ListCell   *lc;
 	Relation	rel;
 	ObjectAddress myself;
 	Oid			puboid;
@@ -741,6 +742,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	bool		for_all_tables = false;
+	bool		for_all_sequences = false;
+
+	/*
+	 * Translate the list of object types (represented by strings) to bool
+	 * flags.
+	 */
+	foreach(lc, stmt->for_all_objects)
+	{
+		char	   *val = strVal(lfirst(lc));
+
+		if (strcmp(val, "tables") == 0)
+			for_all_tables = true;
+		else if (strcmp(val, "sequences") == 0)
+			for_all_sequences = true;
+	}
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
@@ -748,11 +766,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 					   get_database_name(MyDatabaseId));
 
 	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	if (for_all_tables && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 				 errmsg("must be superuser to create FOR ALL TABLES publication")));
 
+	/* FOR ALL SEQUENCES requires superuser */
+	if (for_all_sequences && !superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
 	/* Check if name is used */
@@ -782,7 +806,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+		BoolGetDatum(for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,12 +834,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (for_all_tables || for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+
+	/*
+	 * Unless the publication was created for both all tables and all
+	 * sequences, it might still list individual tables or sequences
+	 * (directly or through a schema), so process those.
+	 */
+	if (!for_all_tables || !for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1008,7 +1039,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1525,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1749,7 +1780,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, rels)
 	{
@@ -1828,7 +1859,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, schemas)
 	{
@@ -1919,6 +1950,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..7b0337d24b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -455,7 +455,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <node>	pub_obj_type
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10557,6 +10558,8 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION FOR ALL TABLES [WITH options]
  *
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
  * pub_obj is one of:
@@ -10575,13 +10578,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					n->for_all_objects = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10696,19 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+pub_obj_type:	TABLES
+					{ $$ = (Node *) makeString("tables"); }
+				| SEQUENCES
+					{ $$ = (Node *) makeString("sequences"); }
+	;
+
+pub_obj_type_list:	pub_obj_type
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' pub_obj_type
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e324070828..953b9128ad 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4185,6 +4185,7 @@ getPublications(Archive *fout, int *numPublications)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4204,23 +4205,29 @@ getPublications(Archive *fout, int *numPublications)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4232,6 +4239,7 @@ getPublications(Archive *fout, int *numPublications)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4251,6 +4259,8 @@ getPublications(Archive *fout, int *numPublications)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4304,6 +4314,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
 
+	if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
 	{
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 865823868f..976115ab3d 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..ace1d1b661 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,17 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f67bf0b892..482c756b3a 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,52 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2110,11 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6219,7 +6281,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6233,19 +6295,37 @@ listPublications(const char *pattern)
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT pubname AS \"%s\",\n"
-					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
-					  gettext_noop("Name"),
-					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
-					  gettext_noop("Inserts"),
-					  gettext_noop("Updates"),
-					  gettext_noop("Deletes"));
+	if (pset.sversion >= 170000)
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  puballsequences AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("All sequences"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT pubname AS \"%s\",\n"
+						  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+						  "  puballtables AS \"%s\",\n"
+						  "  pubinsert AS \"%s\",\n"
+						  "  pubupdate AS \"%s\",\n"
+						  "  pubdelete AS \"%s\"",
+						  gettext_noop("Name"),
+						  gettext_noop("Owner"),
+						  gettext_noop("All tables"),
+						  gettext_noop("Inserts"),
+						  gettext_noop("Updates"),
+						  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6343,6 +6423,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6359,6 +6440,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6372,6 +6454,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6423,6 +6509,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6430,6 +6518,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6440,6 +6530,10 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..f1ee348909 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,9 +3159,9 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
 		COMPLETE_WITH("TABLES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 990ef2f836..8e68adb01f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11945,6 +11945,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..8cc0fe9d0d 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass
+	 * all sequences in the database (except unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -148,6 +155,8 @@ extern List *GetPubPartitionOptionRelations(List *result,
 extern Oid	GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
 											int *ancestor_level);
 
+extern List *GetAllSequencesPublicationRelations(void);
+
 extern bool is_publishable_relation(Relation rel);
 extern bool is_schema_publication(Oid pubid);
 extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..62d1cf47e2 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4168,7 +4168,8 @@ typedef struct CreatePublicationStmt
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
@@ -4191,7 +4192,8 @@ typedef struct AlterPublicationStmt
 	 * objects.
 	 */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 	AlterPublicationAction action;	/* What action to perform with the given
 									 * objects */
 } AlterPublicationStmt;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..d54008ae6f 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,53 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+         pubname         | puballtables | puballsequences 
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f            | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+                       Sequence "pub_test.testpub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_forallsequences"
+
+\dRp+ testpub_forallsequences
+                                    Publication testpub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +272,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +290,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +322,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +338,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +357,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +368,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +404,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +417,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +535,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +752,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +939,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1147,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1188,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1269,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1282,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1311,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1337,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1408,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1419,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1440,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1452,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1464,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1475,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1486,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1497,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1528,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1540,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1622,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1643,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 13178e2b3d..1c646f15a1 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1441,6 +1441,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..e9fc959c8b 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,21 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
-- 
2.34.1

#61Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#60)
1 attachment(s)
Re: Logical Replication of sequences

Here are my comments for patch v20240702-0001

They are all cosmetic and/or typos. Apart from these, the 0001 patch LGTM.

======
doc/src/sgml/func.sgml

Section 9.17. Sequence Manipulation Functions

pg_sequence_state:
nitpick - typo /whethere/whether/
nitpick - reworded slightly using a ChatGPT suggestion. (YMMV, so it
is also fine if you prefer the current wording)

======
src/backend/commands/sequence.c

SetSequenceLastValue:
nitpick - typo in function comment /diffrent/different/

pg_sequence_state:
nitpick - function comment wording: /page LSN/the page LSN/
nitpick - moved some comment details about 'lsn_ret' into the function header
nitpick - rearranged variable assignments to have consistent order
with the values
nitpick - tweaked comments
nitpick - typo /whethere/whether/

======
99.
Please see the attached diffs patch which implements all those
nitpicks mentioned above.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240703_SEQ_0001.txt (text/plain; charset=US-ASCII)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 3dd4805..83f87c1 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19521,11 +19521,11 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
        </para>
        <para>
         Returns information about the sequence. <literal>lsn</literal> is the
-        page lsn of the sequence, <literal>last_value</literal> is the value
-        most recently returned by <function>nextval</function> in the current
+        page lsn of the sequence, <literal>last_value</literal> is the most
+        recent value returned by <function>nextval</function> in the current
         session, <literal>log_cnt</literal> shows how many fetches remain
-        before a new WAL record has to be written and
-        <literal>is_called</literal> indicates whethere the sequence has been
+        before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
         used.
        </para>
        <para>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cd7639f..f33f689 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -338,7 +338,7 @@ ResetSequence(Oid seq_relid)
  * responsible for permissions checking.
  *
  * Note: This function resembles do_setval but does not include the locking and
- * verification steps, as those are managed in a slightly diffrent manner for
+ * verification steps, as those are managed in a slightly different manner for
  * logical replication.
  */
 void
@@ -1262,7 +1262,9 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
- * *lsn_ret will be set to page LSN if the caller requested it
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
@@ -1285,11 +1287,7 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
-	/*
-	 * If the caller requested it, return the page LSN. This allows the
-	 * caller to determine which sequence changes are before/after the
-	 * returned sequence state.
-	 */
+	/* If the caller requested it, return the page LSN. */
 	if (lsn_ret)
 		*lsn_ret = PageGetLSN(page);
 
@@ -1957,9 +1955,9 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 
 	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
 
-	is_called = seq->is_called;
 	last_value = seq->last_value;
 	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
 
 	UnlockReleaseBuffer(buf);
 	relation_close(seqrel, NoLock);
@@ -1970,13 +1968,10 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	/* The value most recently returned by nextval in the current session */
 	values[1] = Int64GetDatum(last_value);
 
-	/*
-	 * Shows how many fetches remain before a new WAL record has to be
-	 * written.
-	 */
+	/* How many fetches remain before a new WAL record has to be written */
 	values[2] = Int64GetDatum(log_cnt);
 
-	/* Indicates whethere the sequence has been used */
+	/* Indicates whether the sequence has been used */
 	values[3] = BoolGetDatum(is_called);
 
 	tuple = heap_form_tuple(tupdesc, values, nulls);
#62vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#59)
3 attachment(s)
Re: Logical Replication of sequences

On Mon, 1 Jul 2024 at 12:57, Peter Smith <smithpb2250@gmail.com> wrote:

Here are some review comments for the patch v20240625-0002

======
Commit Message

1.
This commit enhances logical replication by enabling the inclusion of all
sequences in publications. This improvement facilitates seamless
synchronization of sequence data during operations such as
CREATE SUBSCRIPTION, REFRESH PUBLICATION, and REFRESH PUBLICATION SEQUENCES.

~

Isn't this description getting ahead of the functionality a bit? For
example, it talks about operations like REFRESH PUBLICATION SEQUENCES
but AFAIK that syntax does not exist just yet.

~~~

2.
The commit message should mention that you are only introducing new
syntax for "FOR ALL SEQUENCES" here, but syntax for "FOR SEQUENCE" is
being deferred to some later patch. Without such a note it is not
clear why the gram.y syntax and docs seemed only half done.

======
doc/src/sgml/ref/create_publication.sgml

3.
<varlistentry id="sql-createpublication-params-for-all-tables">
<term><literal>FOR ALL TABLES</literal></term>
+    <term><literal>FOR ALL SEQUENCES</literal></term>
<listitem>
<para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      sequences in the database, including tables created in the future.

It might be better here to keep the descriptions for "ALL TABLES" and
"ALL SEQUENCES" separate; otherwise the wording does not quite seem
appropriate for sequences (e.g. where it says "including tables
created in the future").

~~~

NITPICK - missing spaces
NITPICK - removed Oxford commas since previously there were none

~~~

4.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR ALL SEQUENCES</literal>,or <literal>FOR TABLES IN
SCHEMA</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.

It seems like "FOR ALL SEQUENCES" is out of place since it is jammed
between other clauses referring to TABLES. Would it be better to
mention SEQUENCES last in the list?

~~~

5.
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR ALL SEQUENCES</command>, and
<command>FOR TABLES IN SCHEMA</command> clauses require the invoking

ditto of #4 above.

======
src/backend/catalog/pg_publication.c

GetAllSequencesPublicationRelations:

NITPICK - typo /relation/relations/

======
src/backend/commands/publicationcmds.c

6.
+ foreach(lc, stmt->for_all_objects)
+ {
+ char    *val = strVal(lfirst(lc));
+
+ if (strcmp(val, "tables") == 0)
+ for_all_tables = true;
+ else if (strcmp(val, "sequences") == 0)
+ for_all_sequences = true;
+ }

Consider the foreach_ptr macro to slightly simplify this code.
Actually, this whole logic seems cumbersome -- can't the parser assign
the flags automatically? Please see my more detailed comment #10 below
about this in gram.y.
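
To illustrate the suggestion, a minimal sketch of the foreach_ptr form
(untested; it assumes the list members are String nodes, as in the
quoted code):

    /* sketch only: same behaviour as the quoted foreach() loop */
    foreach_ptr(String, obj, stmt->for_all_objects)
    {
        if (strcmp(strVal(obj), "tables") == 0)
            for_all_tables = true;
        else if (strcmp(strVal(obj), "sequences") == 0)
            for_all_sequences = true;
    }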

~~~

7.
/* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ if (for_all_tables && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES publication")));
+ /* FOR ALL SEQUENCES requires superuser */
+ if (for_all_sequences && !superuser())
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+

The current code is easy to read, but I wonder if it should try harder
to share common code, or at least a common translatable message like
"must be superuser to create %s publication".

~~~

8.
- else
+
+ /*
+ * If the publication might have either tables or sequences (directly or
+ * through a schema), process that.
+ */
+ if (!for_all_tables || !for_all_sequences)

I did not understand why this code cannot just say "else" like before:
the direct or through-schema syntax cannot be specified at the same
time as "FOR ALL ...", so why is the more complicated condition
necessary? Also, the similar code in AlterPublicationOptions() was not
changed to be like this.

======
src/backend/parser/gram.y

9. comment

*
* CREATE PUBLICATION FOR ALL TABLES [WITH options]
*
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
* CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]

The comment is not quite correct because actually you are allowing
simultaneous FOR ALL TABLES, SEQUENCES. It should be more like:

CREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]

pub_obj_type is one of:
TABLES
SEQUENCES

~~~

10.
+pub_obj_type: TABLES
+ { $$ = (Node *) makeString("tables"); }
+ | SEQUENCES
+ { $$ = (Node *) makeString("sequences"); }
+ ;
+
+pub_obj_type_list: pub_obj_type
+ { $$ = list_make1($1); }
+ | pub_obj_type_list ',' pub_obj_type
+ { $$ = lappend($1, $3); }
+ ;

IIUC the only things you need are a flag to say whether FOR ALL TABLES
is in effect and another flag to say whether FOR ALL SEQUENCES is in
effect. So it seems clunky to build up a temporary list of "tables" or
"sequences" strings here, which is subsequently scanned by
CreatePublication to be turned back into booleans.

Can't we just change the CreatePublicationStmt field to have:

A) a 'for_all_types' bitmask instead of a list:
0x0000 means FOR ALL is not specified
0x0001 means ALL TABLES
0x0010 means ALL SEQUENCES

Or, B) have 2 boolean fields ('for_all_tables' and 'for_all_sequences')

...where the gram.y code can be written to assign the flag/s values directly?
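
For illustration, a sketch of option (B); the existing fields are
recalled from memory and only the new for_all_sequences member is the
suggested addition:

    typedef struct CreatePublicationStmt
    {
        NodeTag     type;
        char       *pubname;            /* Name of the publication */
        List       *options;            /* List of DefElem nodes */
        List       *pubobjects;         /* Optional list of publication objects */
        bool        for_all_tables;     /* FOR ALL TABLES was specified */
        bool        for_all_sequences;  /* FOR ALL SEQUENCES was specified */
    } CreatePublicationStmt;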

======
src/bin/pg_dump/pg_dump.c

11.
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");

+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+

Hmm. Is that correct? It looks like a possible bug, because if both
flags are true it will give invalid syntax like "FOR ALL TABLES FOR
ALL SEQUENCES" instead of "FOR ALL TABLES, SEQUENCES"

======
src/bin/pg_dump/t/002_pg_dump.pl

12.
This could also try the test scenario where both FOR ALL clauses are
set simultaneously ("FOR ALL TABLES, SEQUENCES") to check for bugs
like the one suspected in the pg_dump.c review comment #11 above.

======
src/bin/psql/describe.c

13.
+ if (pset.sversion >= 170000)
+ printfPQExpBuffer(&buf,
+   "SELECT pubname AS \"%s\",\n"
+   "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+   "  puballtables AS \"%s\",\n"
+   "  puballsequences AS \"%s\",\n"
+   "  pubinsert AS \"%s\",\n"
+   "  pubupdate AS \"%s\",\n"
+   "  pubdelete AS \"%s\"",
+   gettext_noop("Name"),
+   gettext_noop("Owner"),
+   gettext_noop("All tables"),
+   gettext_noop("All sequences"),
+   gettext_noop("Inserts"),
+   gettext_noop("Updates"),
+   gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+   "SELECT pubname AS \"%s\",\n"
+   "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+   "  puballtables AS \"%s\",\n"
+   "  pubinsert AS \"%s\",\n"
+   "  pubupdate AS \"%s\",\n"
+   "  pubdelete AS \"%s\"",
+   gettext_noop("Name"),
+   gettext_noop("Owner"),
+   gettext_noop("All tables"),
+   gettext_noop("Inserts"),
+   gettext_noop("Updates"),
+   gettext_noop("Deletes"));
+

IMO this should be coded differently so that only the
"puballsequences" column is guarded by the (pset.sversion >= 170000),
and everything else is the same as before. This suggested way would
also be consistent with the existing code version checks (e.g. for
"pubtruncate" or for "pubviaroot").

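For example, an untested sketch of the kind of query-building I mean,
where only the new column is version-guarded:

printfPQExpBuffer(&buf,
                  "SELECT pubname AS \"%s\",\n"
                  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
                  "  puballtables AS \"%s\"",
                  gettext_noop("Name"),
                  gettext_noop("Owner"),
                  gettext_noop("All tables"));

if (pset.sversion >= 170000)
    appendPQExpBuffer(&buf,
                      ",\n  puballsequences AS \"%s\"",
                      gettext_noop("All sequences"));

appendPQExpBuffer(&buf,
                  ",\n  pubinsert AS \"%s\",\n"
                  "  pubupdate AS \"%s\",\n"
                  "  pubdelete AS \"%s\"",
                  gettext_noop("Inserts"),
                  gettext_noop("Updates"),
                  gettext_noop("Deletes"));
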
~~~

NITPICK - Add blank lines
NITPICK - space in "ncols ++"

======
src/bin/psql/tab-complete.c

14.
Hmm. When I tried this, it didn't seem to be working properly.

For example, "CREATE PUBLICATION pub1 FOR ALL" only completes with
"TABLES" but not "SEQUENCES".
For example, "CREATE PUBLICATION pub1 FOR ALL SEQ" doesn't complete to
"SEQUENCES" properly.

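I expected rules along these lines to be needed (sketch only; I have
not verified every combination):

else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
    COMPLETE_WITH("SEQUENCES", "TABLES");
else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
    COMPLETE_WITH("WITH (");
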
======
src/include/catalog/pg_publication.h

NITPICK - move the extern to be adjacent to others like it.

======
src/include/nodes/parsenodes.h

15.
- bool for_all_tables; /* Special publication for all tables in db */
+ List    *for_all_objects; /* Special publication for all objects in
+ * db */
} CreatePublicationStmt;

I felt this List logic is a bit strange. See my comment #10 in gram.y
for more details.

~~~

16.
- bool for_all_tables; /* Special publication for all tables in db */
+ List    *for_all_objects; /* Special publication for all objects in
+ * db */

Ditto comment #15 in AlterPublicationStmt

======
src/test/regress/sql/publication.sql

17.
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication
WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1

Should you also do "\d+ testpub_seq0" here? Otherwise, what was the
point of defining the testpub_seq0 sequence in this test?

~~~

18.
Maybe there are missing test cases for different syntax combinations like:

FOR ALL TABLES, SEQUENCES
FOR ALL SEQUENCES, TABLES

Note that the current list logic of this patch even accepts the
following bogus statement syntax as OK.

test_pub=# CREATE PUBLICATION pub_silly FOR ALL TABLES, SEQUENCES,
TABLES, TABLES, TABLES, SEQUENCES;
CREATE PUBLICATION
test_pub=#

======
99.
Please also refer to the attached nitpicks patch, which addresses all
the cosmetic issues identified above as NITPICKS.

Thank you for your feedback. I have addressed all the comments in the
attached patch.

Regards,
Vignesh

Attachments:

v20240703-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch
From 769979406c0483d287666250bf6c53aba4404da4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240703 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore,
enhancements to psql commands (\d and \dRp) now allow for better display
of publications containing specific sequences or sequences included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" command.
Handling of commands such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  30 +-
 doc/src/sgml/system-views.sgml            |  67 +++
 src/backend/catalog/pg_publication.c      |  86 +++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    |  26 +-
 src/backend/parser/gram.y                 |  83 +++-
 src/bin/pg_dump/pg_dump.c                 |  30 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 193 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  22 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 479 ++++++++++++----------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  30 ++
 18 files changed, 815 insertions(+), 299 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..0c1b469215 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -117,6 +122,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index bdc34cf94e..b893fc2d90 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2147,6 +2152,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..998a840b67 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1254,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..35c7ddaaa3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create FOR ALL %s publication",
+						stmt->for_all_tables ? "TABLES" : "SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +784,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +811,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1011,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1497,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1749,7 +1752,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, rels)
 	{
@@ -1828,7 +1831,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, schemas)
 	{
@@ -1919,6 +1922,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..382cf5c872 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,9 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_allpubobjtype_list(List *allbjects_list, bool *alltables,
+										  bool *allsequences,
+										  core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +275,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	AllPublicationObjSpec *allpublicationobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +459,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +594,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <allpublicationobjectspec> AllPublicationObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10560,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10585,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_allpubobjtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10703,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+AllPublicationObjSpec:
+				TABLES
+					{
+						$$ = makeNode(AllPublicationObjSpec);
+						$$->pubobjtype = PUBLICATION_ALLTABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(AllPublicationObjSpec);
+						$$->pubobjtype = PUBLICATION_ALLSEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	AllPublicationObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' AllPublicationObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19438,49 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process allbjects_list to check if the options have been specified more than
+ * once and set alltables/allsequences.
+ */
+static void
+preprocess_allpubobjtype_list(List *allbjects_list, bool *alltables,
+							  bool *allsequences, core_yyscan_t yyscanner)
+{
+	bool		alltables_specified = false;
+	bool		allsequences_specified = false;
+
+	if (!allbjects_list)
+		return;
+
+	foreach_ptr(AllPublicationObjSpec, allpubob, allbjects_list)
+	{
+		if (allpubob->pubobjtype == PUBLICATION_ALLTABLES)
+		{
+			if (alltables_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(allpubob->location));
+
+			alltables_specified = true;
+			*alltables = true;
+		}
+		else if (allpubob->pubobjtype == PUBLICATION_ALLSEQUENCES)
+		{
+			if (allsequences_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(allpubob->location));
+
+			allsequences_specified = true;
+			*allsequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 7aec016a9f..332f3764e9 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4185,6 +4185,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4201,23 +4202,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4229,6 +4236,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4248,6 +4256,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4295,8 +4305,16 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
-		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	if (pubinfo->puballtables || pubinfo->puballsequences)
+	{
+		appendPQExpBufferStr(query, " FOR ALL");
+		if (pubinfo->puballtables &&  pubinfo->puballsequences)
+			appendPQExpBufferStr(query, " TABLES, SEQUENCES");
+		else if (pubinfo->puballtables)
+			appendPQExpBufferStr(query, " TABLES");
+		else
+			appendPQExpBufferStr(query, " SEQUENCES");
+	}
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..cdd6f11989 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,52 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2110,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6292,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6309,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 170000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6424,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6441,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6455,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6510,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6519,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6531,10 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..da608d074b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f23947025b..c093f3d09d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11945,6 +11945,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..1c50940b57 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,13 +4162,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * All Publication type
+ */
+typedef enum AllPublicationObjType
+{
+	PUBLICATION_ALLTABLES,		/* All tables */
+	PUBLICATION_ALLSEQUENCES,	/* All sequences */
+} AllPublicationObjType;
+
+typedef struct AllPublicationObjSpec
+{
+	NodeTag		type;
+	AllPublicationObjType	pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} AllPublicationObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_tables;		/* All tables */
+	bool		for_all_sequences;	/* All sequences */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
@@ -4191,7 +4208,8 @@ typedef struct AlterPublicationStmt
 	 * objects.
 	 */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 	AlterPublicationAction action;	/* What action to perform with the given
 									 * objects */
 } AlterPublicationStmt;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..579c69375f 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,83 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- sequences publication
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+         pubname         | puballtables | puballsequences 
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f            | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+                       Sequence "pub_test.testpub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_forallsequences"
+
+\dRp+ testpub_forallsequences
+                                    Publication testpub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+--- combination of all tables and all sequences publication
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_for_allsequences_alltables';
+              pubname               | puballtables | puballsequences 
+------------------------------------+--------------+-----------------
+ testpub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ testpub_for_allsequences_alltables
+                               Publication testpub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+DROP PUBLICATION testpub_for_allsequences_alltables;
+-- fail - specifying tables more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - specifying sequences more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +302,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +320,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +352,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +368,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +387,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +398,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +434,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +565,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +782,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +969,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1177,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1218,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1299,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1312,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1341,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1367,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1438,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1449,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1470,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1482,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1494,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1505,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1516,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1527,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1558,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1570,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1652,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1673,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4c789279e5..bd5efd5d27 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ad0af9de5c 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,36 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- sequences publication
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+
+--- combination of all tables and all sequences publication
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_for_allsequences_alltables';
+\dRp+ testpub_for_allsequences_alltables
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+DROP PUBLICATION testpub_for_allsequences_alltables;
+
+-- fail - specifying tables more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - specifying sequences more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
-- 
2.34.1

Attachment: v20240703-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch)
From 11ab3fb6ae00638fa4af919a13c8afe214a347cb Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240703 1/3] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces a couple of new functions: the pg_sequence_state
function allows retrieval of sequence state, including the page LSN. The
SetSequenceLastValue function enables updating a sequence's last value.
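
A minimal usage sketch (assuming the function and output column names added
by this patch; "demo_seq" is only an illustrative sequence name):

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');
    -- Read the current on-disk state of the sequence, including its page LSN.
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('demo_seq'::regclass);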
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 169 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 211 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index f1f22a1960..d30864e7ee 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19506,6 +19506,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>lsn</literal> is the
+        page lsn of the sequence, <literal>last_value</literal> is the most
+        recent value returned by <function>nextval</function> in the current
+        session, <literal>log_cnt</literal> shows how many fetches remain
+        before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..d83dbd2174 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1287,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1894,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1909,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page lsn for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value written to the sequence tuple on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d4ac578ae6..f23947025b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
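
A quick illustration of the new function added by the patch above (this
example is not part of the patch; the sequence name and the page_lsn value
are invented for illustration):

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');

    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('demo_seq'::regclass);
      page_lsn  | last_value | log_cnt | is_called
    ------------+------------+---------+-----------
     0/1A2B3C4D |          1 |      32 | t
    (1 row)

    -- The highest value the publisher may hand out without writing new WAL
    -- is last_value + log_cnt; the 0003 patch below uses this as the value
    -- to set on the subscriber so it cannot produce duplicates.
    SELECT last_value + log_cnt AS safe_restart_value
      FROM pg_sequence_state('demo_seq'::regclass);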

Attachment: v20240703-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From de045d83fc1277d57a453fe137e439b11fd6743e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240703 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Sequences are refreshed with
     ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Sequences newly added on the publisher are added in 'init'
     state to pg_subscription_rel.
   - The sequence synchronization worker then synchronizes them,
     following the same steps as during subscription creation.
   - Only the newly added sequences are synchronized.

3) Introduce a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds sequences newly created on
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - The sequence synchronization worker then re-synchronizes all
     sequences, following the same steps as during subscription creation.
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  11 +
 src/backend/catalog/pg_subscription.c         |  64 ++++
 src/backend/commands/subscriptioncmds.c       | 264 +++++++++++++-
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |   9 +
 src/backend/postmaster/bgworker.c             |   3 +
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  50 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 325 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 146 +++++++-
 src/backend/replication/logical/worker.c      |  12 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |   1 +
 src/include/nodes/parsenodes.h                |   1 +
 src/include/replication/logicalworker.h       |   1 +
 src/include/replication/worker_internal.h     |  13 +
 src/test/subscription/t/034_sequences.pl      | 145 ++++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1041 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 17d84bd321..98be4899a1 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5200,8 +5200,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and the sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index ccdd24312b..af775e6fa9 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the table synchronization workers, the
+    sequence synchronization worker, and parallel apply workers.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 991f629907..62870aa41b 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..fc8a33c0b5 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher and re-synchronize
+      the sequence data for all subscribed sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..7673f1384c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -551,3 +552,66 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid, char state)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	MemoryContext oldctx;
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	if (state != '\0')
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(state));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subseq;
+		SubscriptionRelState *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		subseq = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		seqinfo = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+		seqinfo->relid = subseq->srrelid;
+		d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
+							Anum_pg_subscription_rel_srsublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
\ No newline at end of file
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..32e19a739c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -759,6 +760,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List *sequences;
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -769,6 +771,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * Get the table list from publisher and build local table status
 			 * info.
@@ -898,6 +916,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		/* Get the table list from publisher. */
 		pubrel_names = fetch_table_list(wrconn, sub->publications);
 
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
+
 		/* Get local table list. */
 		subrel_states = GetSubscriptionRelations(sub->oid, false);
 		subrel_count = list_length(subrel_states);
@@ -980,6 +1001,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1006,13 +1028,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				/* Stop the worker if the relation kind is not a sequence. */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
@@ -1047,7 +1071,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		for (off = 0; off < remove_rel_len; off++)
 		{
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+				get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1077,6 +1102,142 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequences data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid, '\0');
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build qsorted array of local sequence oids for faster lookup. This
+		 * can potentially contain all sequences in the database so speed of
+		 * lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach_ptr(SubscriptionRelState, seqinfo, subseq_states)
+			subseq_local_oids[off++] = seqinfo->relid;
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and try to match them to locally
+		 * known sequences. If the sequence is not known locally create a new
+		 * state for it.
+		 *
+		 * Also builds array of local oids of remote sequences for the next
+		 * step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach_ptr(RangeVar, rv, pubseq_names)
+		{
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+
+			if (!bsearch(&relid, subseq_local_oids,
+						 subrel_count, sizeof(Oid), oid_cmp))
+			{
+				AddSubscriptionRelState(sub->oid, relid,
+										SUBREL_STATE_INIT,
+										InvalidXLogRecPtr, true);
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+										 rv->schemaname, rv->relname, sub->name)));
+			}
+		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			else
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
+			}
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1404,6 +1565,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
@@ -2060,11 +2235,17 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
+		char	   *schemaname;
+		char	   *tablename;
+
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			schemaname = get_namespace_name(get_rel_namespace(relid));
+			tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2234,6 +2415,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	ListCell   *lc;
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach(lc, publications)
+	{
+		char	   *pubname = strVal(lfirst(lc));
+
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 382cf5c872..c9d07d9915 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10839,6 +10839,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..6770e26569 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -132,6 +132,9 @@ static const struct
 	},
 	{
 		"TablesyncWorkerMain", TablesyncWorkerMain
+	},
+	{
+		"SequencesyncWorkerMain", SequencesyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..466771d775 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -267,6 +267,39 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip parallel apply workers. */
+		/* Skip any workers that are not sequence sync workers. */
+			continue;
+
+		if (w->in_use && w->subid == subid && (only_running && w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -311,6 +344,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -320,7 +354,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - parallel apply worker is the only kind of subworker
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
-	Assert(is_tablesync_worker == OidIsValid(relid));
+	Assert(is_tablesync_worker == OidIsValid(relid) || is_sequencesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
 	ereport(DEBUG1,
@@ -396,7 +430,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -491,6 +526,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -1351,6 +1394,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..7f609fc0e9
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,325 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%u)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch information for sequence \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * The publisher may hand out up to log_cnt further values without
+	 * writing a new WAL record, because those values are already covered
+	 * by the last WAL record written for the sequence.  During the initial
+	 * sync we read the current on-disk state, so we use
+	 * (last_value + log_cnt), the value that WAL record accounts for.
+	 *
+	 * Otherwise we might get duplicate values (on the subscriber) if we
+	 * failed over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* Set the sequence's last value in a non-transactional way. */
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List *sequences;
+	char	   slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner;
+	int 		curr_seq = 0;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionSequences(subid,
+										 SUBREL_STATE_INIT);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+
+	seq_count = list_length(sequences);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences)
+	{
+		Relation	sequencerel;
+		XLogRecPtr	sequence_lsn;
+		int			next_seq;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
+			StartTransactionCommand();
+
+		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless the
+		 * user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+									ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						get_relkind_objtype(sequencerel->rd_rel->relkind),
+						RelationGetRelationName(sequencerel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or superuser,
+		 * who has it implicitly), but other roles should not be able to
+		 * circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequencerel, NoLock);
+
+		next_seq = curr_seq + 1;
+		if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+		}
+
+		curr_seq++;
+	}
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling.
+ *
+ * Disable the subscription on error, if that behavior is required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication Sequencesync worker entry point */
+void
+SequencesyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(false);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b00267f042..5541187353 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -139,9 +139,9 @@ static StringInfo copybuf = NULL;
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(bool istable)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,10 +157,15 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (istable)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequences synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
@@ -387,7 +392,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(true);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -463,6 +468,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +493,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +671,106 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequence synchronization worker is already running, there is no need
+ * to start another one; the existing sequence sync worker will synchronize
+ * the sequences.  If any sequences still need to be synchronized after that
+ * worker exits, a new sequence sync worker can be started in the next
+ * iteration.  To prevent restarting the sequence sync worker at a high
+ * frequency after a failure, we store its last start time and start a
+ * new worker only after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Start sequence sync worker if there is no sequence sync worker running.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Is there a sequence sync worker already running?
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+															true);
+		/*
+		 * If there is a sequence sync worker, the sequence sync worker
+		 * will handle sync of this sequence.
+		 */
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int	nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence sync
+			 * worker to sync the sequences and break from the loop, as this
+			 * sequence sync worker will take care of synchronizing all the
+			 * sequences that are in init state.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,9 +793,16 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
 			break;
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1320,7 +1438,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(true);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1716,7 +1834,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(true);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3b285894db..478bea7f69 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,12 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4631,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequences synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4646,7 +4656,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index da608d074b..5c9586a5b9 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..3cf7834f8d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -91,5 +91,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionSequences(Oid subid, char state);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1c50940b57..8107ba1cf5 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4231,6 +4231,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..f380c1ba60 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -19,6 +19,7 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
 extern void TablesyncWorkerMain(Datum main_arg);
+extern void SequencesyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..23b4267598 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -32,6 +32,7 @@ typedef enum LogicalRepWorkerType
 	WORKERTYPE_TABLESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
+	WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;
 
 typedef struct LogicalRepWorker
@@ -240,6 +241,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +255,8 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(bool istable);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -329,6 +334,8 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
 #define isTablesyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequencesyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
@@ -336,6 +343,12 @@ am_tablesync_worker(void)
 	return isTablesyncWorker(MyLogicalRepWorker);
 }
 
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequencesyncWorker(MyLogicalRepWorker);
+}
+
 static inline bool
 am_leader_apply_worker(void)
 {
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..94bf83a14b
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,145 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus extra sequences that
+# will be created on the publisher later
+$ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+	CREATE SEQUENCE s2;
+	CREATE SEQUENCE s3;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+# Wait for initial sync to finish as well
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');
+
+# create a new sequence, it should be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s2;
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# changes to existing sequences should not be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+# Refresh the publication after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'existing sequence not re-synced by REFRESH PUBLICATION');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '132|0|t', 'newly added sequence synced after REFRESH PUBLICATION');
+
+# Changes to both new and existing sequences should be synced after REFRESH
+# PUBLICATION SEQUENCES.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s3;
+	INSERT INTO seq_test SELECT nextval('s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Refresh publication sequences after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '231|0|t', 'existing sequence re-synced after REFRESH PUBLICATION SEQUENCES');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s3;
+));
+
+is($result, '132|0|t', 'newly created sequence synced after REFRESH PUBLICATION SEQUENCES');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e6c1caf649..50e651ce8c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2767,6 +2767,7 @@ SubscriptingRefState
 Subscription
 SubscriptionInfo
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1
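
To make the workflow added by the 0003 patch concrete, here is a rough usage
sketch (not part of the patch; the publication and subscription names and the
connection string are illustrative, and the syntax mirrors what the new test
034_sequences.pl exercises):

    -- On the publisher:
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- On the subscriber: sequences are registered in 'init' state and a
    -- sequence synchronization worker copies their state in batches of 100.
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- Pick up sequences created on the publisher afterwards; only the
    -- newly added sequences are synchronized.
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

    -- Re-synchronize all subscribed sequences, e.g. shortly before a
    -- planned switchover or upgrade.
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;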

#63vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#61)
Re: Logical Replication of sequences

On Wed, 3 Jul 2024 at 08:24, Peter Smith <smithpb2250@gmail.com> wrote:

Here are my comments for patch v20240702-0001

They are all cosmetic and/or typos. Apart from these the 0001 patch LGTM.

======
doc/src/sgml/func.sgml

Section 9.17. Sequence Manipulation Functions

pg_sequence_state:
nitpick - typo /whethere/whether/
nitpick - reworded slightly using a ChatGPT suggestion. (YMMV, so it
is fine also if you prefer the current wording)

======
src/backend/commands/sequence.c

SetSequenceLastValue:
nitpick - typo in function comment /diffrent/different/

pg_sequence_state:
nitpick - function comment wording: /page LSN/the page LSN/
nitpick - moved some comment details about 'lsn_ret' into the function header
nitpick - rearranged variable assignments to have consistent order
with the values
nitpick - tweaked comments
nitpick - typo /whethere/whether/

======
99.
Please see the attached diffs patch which implements all those
nitpicks mentioned above.

Thank you for your feedback. I have addressed all the comments in the
v20240703 version of the patch attached at [1].

[1]: /messages/by-id/CALDaNm0mSSrvHNRnC67f0HWMpoLW9UzxGVXimhwbRtKjE7Aa-Q@mail.gmail.com

Regards,
Vignesh

#64Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#62)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh. Here are my comments for the latest patch v20240703-0001.

======
doc/src/sgml/func.sgml

nitpick - /lsn/LSN/ (all other doc pages I found use uppercase for this acronym)

======
src/backend/commands/sequence.c

nitpick - /lsn/LSN/

======
Please see attached nitpicks diff.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240704_SEQ_0001.txt (text/plain)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index d30864e..fb3f973 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19521,7 +19521,7 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
        </para>
        <para>
         Returns information about the sequence. <literal>lsn</literal> is the
-        page lsn of the sequence, <literal>last_value</literal> is the most
+        page LSN of the sequence, <literal>last_value</literal> is the most
         recent value returned by <function>nextval</function> in the current
         session, <literal>log_cnt</literal> shows how many fetches remain
         before a new WAL record has to be written, and
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index d83dbd2..bff990a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1958,7 +1958,7 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	UnlockReleaseBuffer(buf);
 	relation_close(seqrel, NoLock);
 
-	/* Page lsn for the sequence */
+	/* Page LSN for the sequence */
 	values[0] = LSNGetDatum(lsn);
 
 	/* The value most recently returned by nextval in the current session */
#65vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#64)
3 attachment(s)
Re: Logical Replication of sequences

On Thu, 4 Jul 2024 at 06:40, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh. Here are my comments for the latest patch v20240703-0001.

======
doc/src/sgml/func.sgml

nitpick - /lsn/LSN/ (all other doc pages I found use uppercase for this acronym)

======
src/backend/commands/sequence.c

nitpick - /lsn/LSN/

Thanks for the comments; the attached patch includes those changes.

Regards,
Vignesh

Attachments:

v20240704-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch)
From c71e4c27fa548cf36770874ce7d5c0c9d795ae20 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240704 1/3] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces a couple of new functions: the pg_sequence_state
function allows retrieval of the sequence state, including the page LSN. The
SetSequenceLastValue function enables updating a sequence to a specified value.
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 169 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 211 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index f1f22a1960..fb3f9732de 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19506,6 +19506,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ()
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is the
+        page LSN of the sequence, <literal>last_value</literal> is the most
+        recent value returned by <function>nextval</function> in the current
+        session, <literal>log_cnt</literal> shows how many fetches remain
+        before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..bff990afa7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1287,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1894,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1909,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d4ac578ae6..f23947025b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
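
As a quick illustration (not part of the patch), the new function introduced
by 0001 can be exercised from psql along the lines of the regression test
above; the sequence name demo_seq is only an example:

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');
-- page_lsn is the page LSN of the sequence; last_value, log_cnt and
-- is_called reflect the current on-disk state.
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');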

v20240704-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 9704dbac34ee77e69d940dffe13e037371a2b5bc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240704 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
	ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization via the sequence sync worker,
     following the steps listed for subscription creation.
   - Synchronization occurs only for the newly added sequences.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences via the sequence
     sync worker, following the steps listed for subscription creation.
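
As a rough sketch of the intended subscriber-side workflow (based on the
subscription TAP test for this feature; the subscription name seq_sub is
taken from that test):

-- Pick up newly added sequences; already-synchronized sequences are not
-- re-synced by a plain refresh.
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

-- Re-synchronize all published sequences: they are reset to 'init' state
-- and the sequence sync worker copies their state again.
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

-- Wait until the sequence sync worker has marked everything as ready.
SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');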
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  11 +
 src/backend/catalog/pg_subscription.c         |  64 ++++
 src/backend/commands/subscriptioncmds.c       | 264 +++++++++++++-
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |   9 +
 src/backend/postmaster/bgworker.c             |   3 +
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  50 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 325 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 146 +++++++-
 src/backend/replication/logical/worker.c      |  12 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |   1 +
 src/include/nodes/parsenodes.h                |   1 +
 src/include/replication/logicalworker.h       |   1 +
 src/include/replication/worker_internal.h     |  13 +
 src/test/subscription/t/034_sequences.pl      | 145 ++++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1041 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 17d84bd321..98be4899a1 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5200,8 +5200,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and the sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index ccdd24312b..af775e6fa9 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the table synchronization workers, the
+    sequence synchronization worker, and parallel apply workers.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 991f629907..62870aa41b 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..fc8a33c0b5 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher and
+      re-synchronize the sequence data.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..7673f1384c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -551,3 +552,66 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid, char state)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	MemoryContext oldctx;
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	if (state != '\0')
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(state));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subseq;
+		SubscriptionRelState *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		subseq = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		seqinfo = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+		seqinfo->relid = subseq->srrelid;
+		d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
+							Anum_pg_subscription_rel_srsublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
\ No newline at end of file
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..32e19a739c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -759,6 +760,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List *sequences;
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -769,6 +771,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * Get the table list from publisher and build local table status
 			 * info.
@@ -898,6 +916,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		/* Get the table list from publisher. */
 		pubrel_names = fetch_table_list(wrconn, sub->publications);
 
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
+
 		/* Get local table list. */
 		subrel_states = GetSubscriptionRelations(sub->oid, false);
 		subrel_count = list_length(subrel_states);
@@ -980,6 +1001,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1006,13 +1028,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				/* Stop the worker if the relation is not a sequence. */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
@@ -1047,7 +1071,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		for (off = 0; off < remove_rel_len; off++)
 		{
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+				get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1077,6 +1102,142 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequence data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid, '\0');
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build qsorted array of local sequence oids for faster lookup. This
+		 * can potentially contain all sequences in the database so speed of
+		 * lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach_ptr(SubscriptionRelState, seqinfo, subseq_states)
+			subseq_local_oids[off++] = seqinfo->relid;
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and try to match them to locally
+		 * known sequences. If the sequence is not known locally create a new
+		 * state for it.
+		 *
+		 * Also builds array of local oids of remote sequences for the next
+		 * step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach_ptr(RangeVar, rv, pubseq_names)
+		{
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+
+			if (!bsearch(&relid, subseq_local_oids,
+						 subrel_count, sizeof(Oid), oid_cmp))
+			{
+				AddSubscriptionRelState(sub->oid, relid,
+										SUBREL_STATE_INIT,
+										InvalidXLogRecPtr, true);
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+										 rv->schemaname, rv->relname, sub->name)));
+			}
+		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			else
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
+			}
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1404,6 +1565,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
@@ -2060,11 +2235,17 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
+		char	   *schemaname;
+		char	   *tablename;
+
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			schemaname = get_namespace_name(get_rel_namespace(relid));
+			tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2234,6 +2415,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	ListCell   *lc;
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach(lc, publications)
+	{
+		char	   *pubname = strVal(lfirst(lc));
+
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 382cf5c872..c9d07d9915 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10839,6 +10839,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..6770e26569 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -132,6 +132,9 @@ static const struct
 	},
 	{
 		"TablesyncWorkerMain", TablesyncWorkerMain
+	},
+	{
+		"SequencesyncWorkerMain", SequencesyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..466771d775 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -267,6 +267,39 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip parallel apply workers. */
+		/* Skip any workers that are not sequence sync workers. */
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -311,6 +344,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -320,7 +354,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - parallel apply worker is the only kind of subworker
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
-	Assert(is_tablesync_worker == OidIsValid(relid));
+	Assert(is_tablesync_worker == OidIsValid(relid) || is_sequencesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
 	ereport(DEBUG1,
@@ -396,7 +430,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -491,6 +526,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -1351,6 +1394,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..7f609fc0e9
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,325 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = (Datum) 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch information for sequence \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * Logical replication of sequences is based on decoding WAL records that
+	 * describe the "next" state of the sequence, which the current state in
+	 * the relfilenode is yet to reach. But during the initial sync we read
+	 * the current state, so we need to reconstruct the WAL record that was
+	 * logged when the current batch of sequence values was started.
+	 *
+	 * Otherwise we might get duplicate values (on subscriber) if we failed
+	 * over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* Set the sequence value in a non-transactional way. */
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences()
+{
+	char	   *err;
+	bool		must_use_password;
+	List *sequences;
+	char	   slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner;
+	int 		curr_seq = 0;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionSequences(subid,
+										 SUBREL_STATE_INIT);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+
+	seq_count = list_length(sequences);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences)
+	{
+		Relation	sequencerel;
+		XLogRecPtr	sequence_lsn;
+		int			next_seq;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
+			StartTransactionCommand();
+
+		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence value is set as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+									ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						get_relkind_objtype(sequencerel->rd_rel->relkind),
+						RelationGetRelationName(sequencerel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or superuser,
+		 * who has it implicitly), but other roles should not be able to
+		 * circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequencerel, NoLock);
+
+		next_seq = curr_seq + 1;
+		if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+		}
+
+		curr_seq++;
+	}
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if it's required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication Sequencesync worker entry point */
+void
+SequencesyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(false);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b00267f042..5541187353 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -139,9 +139,9 @@ static StringInfo copybuf = NULL;
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(bool istable)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,10 +157,15 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (istable)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequences synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
@@ -387,7 +392,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(true);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -463,6 +468,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +493,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +671,106 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequence synchronization worker is already running, there is no need
+ * to start another one; the existing worker will synchronize the sequences.
+ * If any sequences still need to be synced after that worker exits, a new
+ * sequence sync worker can be started in the next iteration. To prevent
+ * starting the sequence sync worker at a high frequency after a failure, we
+ * store its last start time and start a new sync worker only after waiting
+ * at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply()
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription tables here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Start sequence sync worker if there is no sequence sync worker running.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+															true);
+		/*
+		 * If there is a sequence sync worker, the sequence sync worker
+		 * will handle sync of this sequence.
+		 */
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int	nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence sync
+			 * worker to sync the sequences and break from the loop, as this
+			 * sequence sync worker will take care of synchronizing all the
+			 * sequences that are in init state.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,9 +793,16 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
 			break;
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1320,7 +1438,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(true);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1716,7 +1834,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(true);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3b285894db..478bea7f69 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,12 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4631,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequences synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4646,7 +4656,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index da608d074b..5c9586a5b9 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..3cf7834f8d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -91,5 +91,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionSequences(Oid subid, char state);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1c50940b57..8107ba1cf5 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4231,6 +4231,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..f380c1ba60 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -19,6 +19,7 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
 extern void TablesyncWorkerMain(Datum main_arg);
+extern void SequencesyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..23b4267598 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -32,6 +32,7 @@ typedef enum LogicalRepWorkerType
 	WORKERTYPE_TABLESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
+	WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;
 
 typedef struct LogicalRepWorker
@@ -240,6 +241,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +255,8 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(bool istable);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -329,6 +334,8 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
 #define isTablesyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequencesyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
@@ -336,6 +343,12 @@ am_tablesync_worker(void)
 	return isTablesyncWorker(MyLogicalRepWorker);
 }
 
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequencesyncWorker(MyLogicalRepWorker);
+}
+
 static inline bool
 am_leader_apply_worker(void)
 {
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..94bf83a14b
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,145 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+	CREATE SEQUENCE s2;
+	CREATE SEQUENCE s3;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+# Wait for initial sync to finish as well
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');
+
+# create a new sequence, it should be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s2;
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# changes to existing sequences should not be synced
+$node_publisher->safe_psql(
+	'postgres', qq(
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+# Refresh the publication after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'existing sequence s not re-synced by REFRESH PUBLICATION');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '132|0|t', 'newly created sequence s2 synced after REFRESH PUBLICATION');
+
+# Changes to both new and existing sequences should be synced after REFRESH
+# PUBLICATION SEQUENCES.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s3;
+	INSERT INTO seq_test SELECT nextval('s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Refresh publication sequences after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '231|0|t', 'existing sequence s2 re-synced after REFRESH PUBLICATION SEQUENCES');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s3;
+));
+
+is($result, '132|0|t', 'newly created sequence s3 synced after REFRESH PUBLICATION SEQUENCES');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e710fa48e5..5147519047 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2768,6 +2768,7 @@ SubscriptingRefState
 Subscription
 SubscriptionInfo
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1

Attachment: v20240704-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 172a0e87da4b57fb253193590bd8bdf4a21294b3 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240704 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore,
enhancements to psql commands (\d and \dRp) now allow for better display
of publications containing specific sequences or sequences included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" command.
Handling of commands such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
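
For illustration, the intended usage is roughly the following (a sketch only;
the publication name is arbitrary):

    -- on the publisher: publish all sequences in the database
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- list the sequences included in the publication
    SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

    -- in psql, \dRp+ seq_pub now reports the "All sequences" attribute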
---
 doc/src/sgml/ref/create_publication.sgml  |  30 +-
 doc/src/sgml/system-views.sgml            |  67 +++
 src/backend/catalog/pg_publication.c      |  86 +++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    |  26 +-
 src/backend/parser/gram.y                 |  83 +++-
 src/bin/pg_dump/pg_dump.c                 |  30 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 193 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  22 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 479 ++++++++++++----------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  30 ++
 18 files changed, 815 insertions(+), 299 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..0c1b469215 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -117,6 +122,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index bdc34cf94e..b893fc2d90 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2147,6 +2152,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..998a840b67 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1254,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..35c7ddaaa3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create FOR ALL %s publication",
+						stmt->for_all_tables ? "TABLES" : "SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +784,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +811,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1011,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1497,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1749,7 +1752,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, rels)
 	{
@@ -1828,7 +1831,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 {
 	ListCell   *lc;
 
-	Assert(!stmt || !stmt->for_all_tables);
+	Assert(!stmt || !stmt->for_all_objects);
 
 	foreach(lc, schemas)
 	{
@@ -1919,6 +1922,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..382cf5c872 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,9 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_allpubobjtype_list(List *allbjects_list, bool *alltables,
+										  bool *allsequences,
+										  core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +275,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	AllPublicationObjSpec *allpublicationobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +459,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +594,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <allpublicationobjectspec> AllPublicationObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10560,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10585,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_allpubobjtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10703,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+AllPublicationObjSpec:
+				TABLES
+					{
+						$$ = makeNode(AllPublicationObjSpec);
+						$$->pubobjtype = PUBLICATION_ALLTABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(AllPublicationObjSpec);
+						$$->pubobjtype = PUBLICATION_ALLSEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	AllPublicationObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' AllPublicationObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19438,49 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process allbjects_list to check if the options have been specified more than
+ * once and set alltables/allsequences.
+ */
+static void
+preprocess_allpubobjtype_list(List *allbjects_list, bool *alltables,
+							  bool *allsequences, core_yyscan_t yyscanner)
+{
+	bool		alltables_specified = false;
+	bool		allsequences_specified = false;
+
+	if (!allbjects_list)
+		return;
+
+	foreach_ptr(AllPublicationObjSpec, allpubob, allbjects_list)
+	{
+		if (allpubob->pubobjtype == PUBLICATION_ALLTABLES)
+		{
+			if (alltables_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(allpubob->location));
+
+			alltables_specified = true;
+			*alltables = true;
+		}
+		else if (allpubob->pubobjtype == PUBLICATION_ALLSEQUENCES)
+		{
+			if (allsequences_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(allpubob->location));
+
+			allsequences_specified = true;
+			*allsequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 5426f1177c..0f55a1b78e 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4206,6 +4206,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4222,23 +4223,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4250,6 +4257,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4269,6 +4277,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4316,8 +4326,16 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
-		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	if (pubinfo->puballtables || pubinfo->puballsequences)
+	{
+		appendPQExpBufferStr(query, " FOR ALL");
+		if (pubinfo->puballtables &&  pubinfo->puballsequences)
+			appendPQExpBufferStr(query, " TABLES, SEQUENCES");
+		else if (pubinfo->puballtables)
+			appendPQExpBufferStr(query, " TABLES");
+		else
+			appendPQExpBufferStr(query, " SEQUENCES");
+	}
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..cdd6f11989 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,52 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2110,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6292,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6309,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 170000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6424,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6441,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6455,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6510,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6519,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6531,10 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..da608d074b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f23947025b..c093f3d09d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11945,6 +11945,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..1c50940b57 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,13 +4162,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * All Publication type
+ */
+typedef enum AllPublicationObjType
+{
+	PUBLICATION_ALLTABLES,		/* All tables */
+	PUBLICATION_ALLSEQUENCES,	/* All sequences */
+} AllPublicationObjType;
+
+typedef struct AllPublicationObjSpec
+{
+	NodeTag		type;
+	AllPublicationObjType	pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} AllPublicationObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_tables;		/* All tables */
+	bool		for_all_sequences;	/* All sequences */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
@@ -4191,7 +4208,8 @@ typedef struct AlterPublicationStmt
 	 * objects.
 	 */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	List	   *for_all_objects;	/* Special publication for all objects in
+									 * db */
 	AlterPublicationAction action;	/* What action to perform with the given
 									 * objects */
 } AlterPublicationStmt;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..579c69375f 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,83 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- sequences publication
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+         pubname         | puballtables | puballsequences 
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f            | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+                       Sequence "pub_test.testpub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "testpub_forallsequences"
+
+\dRp+ testpub_forallsequences
+                                    Publication testpub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+--- combination of all tables and all sequences publication
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_for_allsequences_alltables';
+              pubname               | puballtables | puballsequences 
+------------------------------------+--------------+-----------------
+ testpub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ testpub_for_allsequences_alltables
+                               Publication testpub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+DROP PUBLICATION testpub_for_allsequences_alltables;
+-- fail - specifying tables more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - specifying sequences more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +302,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +320,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +352,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +368,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +387,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +398,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +434,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +565,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +782,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +969,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1177,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1218,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1299,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1312,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1341,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1367,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1438,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1449,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1470,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1482,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1494,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1505,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1516,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1527,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1558,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1570,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1652,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1673,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4c789279e5..bd5efd5d27 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ad0af9de5c 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,36 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- sequences publication
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+
+--- combination of all tables and all sequences publication
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_for_allsequences_alltables';
+\dRp+ testpub_for_allsequences_alltables
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
+DROP PUBLICATION testpub_forallsequences;
+DROP PUBLICATION testpub_for_allsequences_alltables;
+
+-- fail - specifying tables more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - specifying sequences more than once;
+CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
-- 
2.34.1

#66Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#62)
1 attachment(s)
Re: Logical Replication of sequences

Here are my review comments for the patch v20240703-0002

======
doc/src/sgml/ref/create_publication.sgml

nitpick - consider putting the "FOR ALL SEQUENCES" para last, because
when more sequence syntax is added later, IMO it will be better to
describe all the TABLES together and then all the SEQUENCES together.

nitpick - /synchronizing changes/synchronizes changes/

Question: Was there a reason you chose the wording "synchronizes changes"
instead of the same "replicates changes" wording used for FOR ALL TABLES?

======
src/backend/catalog/system_views.sql

1.
Should there be some new test for the view? Otherwise, AFAICT this
patch has no tests that will exercise the new function
pg_get_publication_sequences.

======
src/backend/commands/publicationcmds.c

2.
+ errmsg("must be superuser to create FOR ALL %s publication",
+ stmt->for_all_tables ? "TABLES" : "SEQUENCES")));

nitpick - the combined error message may be fine, but I think
translators will prefer the substitution to be the full "FOR ALL
TABLES" and "FOR ALL SEQUENCES" instead of just the keywords that are
different.

======
src/backend/parser/gram.y

3.
Some of these new things could perhaps be named better:

'preprocess_allpubobjtype_list' => 'preprocess_pub_all_objtype_list'

'AllPublicationObjSpec *allpublicationobjectspec;' =>
'PublicationAllObjSpec *publicationallobjectspec;'

(I didn't include these in the nitpicks diff because you probably have
better ideas than I do for good names.)

~~~

nitpick - typo in comment /SCHEMAS/SEQUENCES/

preprocess_allpubobjtype_list:
nitpick - typo /allbjects_list/all_objects_list/
nitpick - simplify /allpubob/obj/
nitpick - add underscores in the enums

======
src/bin/pg_dump/pg_dump.c

4.
+ if (pubinfo->puballtables || pubinfo->puballsequences)
+ {
+ appendPQExpBufferStr(query, " FOR ALL");
+ if (pubinfo->puballtables &&  pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " TABLES, SEQUENCES");
+ else if (pubinfo->puballtables)
+ appendPQExpBufferStr(query, " TABLES");
+ else
+ appendPQExpBufferStr(query, " SEQUENCES");
+ }

nitpick - it seems over-complicated; see the nitpicks diff for my suggestion.

======
src/include/nodes/parsenodes.h

nitpick - put underscores in the enum values

~~

5.
- bool for_all_tables; /* Special publication for all tables in db */
+ List    *for_all_objects; /* Special publication for all objects in
+ * db */

Is this OK? Saying "for all objects" seemed misleading.

======
src/test/regress/sql/publication.sql

nitpick - some small changes to comments, e.g. writing keywords in uppercase

~~~

6.
I asked this before in a previous review [1-#17] -- I didn't
understand the point of the sequence 'testpub_seq0' since nobody seems
to be doing anything with it. Should it just be removed? Or is there a
missing test case to use it?

~~~

7.
Other things to consider:

(I didn't include these in my attached diff)

* could use a single CREATE SEQUENCE stmt instead of multiple

* could use a single DROP PUBLICATION stmt instead of multiple

* shouldn't all publication names ideally have a 'regress_' prefix?

======
99.
Please refer to the attached nitpicks diff, which has the implementation
for the nitpicks cited above.

======
[1]: /messages/by-id/CAHut+Pvrk75vSDkaXJVmhhZuuqQSY98btWJV=BMZAnyTtKRB4g@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240704-SEQ-0002.txt (text/plain)
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 0c1b469..9891fef 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -122,16 +122,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-sequences">
-    <term><literal>FOR ALL SEQUENCES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that synchronizing changes for all sequences
-      in the database, including sequences created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-all-tables">
     <term><literal>FOR ALL TABLES</literal></term>
     <listitem>
@@ -173,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 35c7dda..a2ccbf3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -751,8 +751,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL %s publication",
-						stmt->for_all_tables ? "TABLES" : "SEQUENCES")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 382cf5c..760d700 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10565,7 +10565,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  * pub_obj_type is one of:
  *
  *		TABLES
- *		SCHEMAS
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10707,13 +10707,13 @@ AllPublicationObjSpec:
 				TABLES
 					{
 						$$ = makeNode(AllPublicationObjSpec);
-						$$->pubobjtype = PUBLICATION_ALLTABLES;
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
 						$$->location = @1;
 					}
 				| SEQUENCES
 					{
 						$$ = makeNode(AllPublicationObjSpec);
-						$$->pubobjtype = PUBLICATION_ALLSEQUENCES;
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
 						$$->location = @1;
 					}
 					;
@@ -19439,41 +19439,41 @@ parsePartitionStrategy(char *strategy)
 }
 
 /*
- * Process allbjects_list to check if the options have been specified more than
+ * Process all_objects_list to check if the options have been specified more than
  * once and set alltables/allsequences.
  */
 static void
-preprocess_allpubobjtype_list(List *allbjects_list, bool *alltables,
+preprocess_allpubobjtype_list(List *all_objects_list, bool *alltables,
 							  bool *allsequences, core_yyscan_t yyscanner)
 {
 	bool		alltables_specified = false;
 	bool		allsequences_specified = false;
 
-	if (!allbjects_list)
+	if (!all_objects_list)
 		return;
 
-	foreach_ptr(AllPublicationObjSpec, allpubob, allbjects_list)
+	foreach_ptr(AllPublicationObjSpec, obj, all_objects_list)
 	{
-		if (allpubob->pubobjtype == PUBLICATION_ALLTABLES)
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
 		{
 			if (alltables_specified)
 				ereport(ERROR,
 						errcode(ERRCODE_SYNTAX_ERROR),
 						errmsg("invalid publication object list"),
 						errdetail("TABLES can be specified only once."),
-						parser_errposition(allpubob->location));
+						parser_errposition(obj->location));
 
 			alltables_specified = true;
 			*alltables = true;
 		}
-		else if (allpubob->pubobjtype == PUBLICATION_ALLSEQUENCES)
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
 		{
 			if (allsequences_specified)
 				ereport(ERROR,
 						errcode(ERRCODE_SYNTAX_ERROR),
 						errmsg("invalid publication object list"),
 						errdetail("SEQUENCES can be specified only once."),
-						parser_errposition(allpubob->location));
+						parser_errposition(obj->location));
 
 			allsequences_specified = true;
 			*allsequences = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0f55a1b..da713c2 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4326,16 +4326,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables || pubinfo->puballsequences)
-	{
-		appendPQExpBufferStr(query, " FOR ALL");
-		if (pubinfo->puballtables &&  pubinfo->puballsequences)
-			appendPQExpBufferStr(query, " TABLES, SEQUENCES");
-		else if (pubinfo->puballtables)
-			appendPQExpBufferStr(query, " TABLES");
-		else
-			appendPQExpBufferStr(query, " SEQUENCES");
-	}
+	if (pubinfo->puballtables &&  pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
+		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1c50940..50efbe5 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4167,8 +4167,8 @@ typedef struct PublicationObjSpec
  */
 typedef enum AllPublicationObjType
 {
-	PUBLICATION_ALLTABLES,		/* All tables */
-	PUBLICATION_ALLSEQUENCES,	/* All sequences */
+	PUBLICATION_ALL_TABLES,		/* All tables */
+	PUBLICATION_ALL_SEQUENCES,	/* All sequences */
 } AllPublicationObjType;
 
 typedef struct AllPublicationObjSpec
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 579c693..d0200d5 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -228,9 +228,10 @@ Tables:
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
---- sequences publication
+--- Tests for publications with SEQUENCES
 CREATE SEQUENCE testpub_seq0;
 CREATE SEQUENCE pub_test.testpub_seq1;
+-- FOR ALL SEQUENCES
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
 RESET client_min_messages;
@@ -255,7 +256,7 @@ Publications:
  regress_publication_user | f          | t             | t       | t       | t       | t         | f
 (1 row)
 
---- combination of all tables and all sequences publication
+--- FOR ALL specifying both TABLES and SEQUENCES
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
 RESET client_min_messages;
@@ -275,13 +276,13 @@ SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname
 DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
 DROP PUBLICATION testpub_forallsequences;
 DROP PUBLICATION testpub_for_allsequences_alltables;
--- fail - specifying tables more than once;
+-- fail - FOR ALL specifying TABLES more than once
 CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
 ERROR:  invalid publication object list
 LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
                                                                 ^
 DETAIL:  TABLES can be specified only once.
--- fail - specifying sequences more than once;
+-- fail - FOR ALL specifying SEQUENCES more than once
 CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
 ERROR:  invalid publication object list
 LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index ad0af9d..c28d554 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,10 +117,11 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
---- sequences publication
+--- Tests for publications with SEQUENCES
 CREATE SEQUENCE testpub_seq0;
 CREATE SEQUENCE pub_test.testpub_seq1;
 
+-- FOR ALL SEQUENCES
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES;
 RESET client_min_messages;
@@ -129,7 +130,7 @@ SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname
 \d+ pub_test.testpub_seq1
 \dRp+ testpub_forallsequences
 
---- combination of all tables and all sequences publication
+--- FOR ALL specifying both TABLES and SEQUENCES
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
 RESET client_min_messages;
@@ -141,10 +142,10 @@ DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;
 DROP PUBLICATION testpub_forallsequences;
 DROP PUBLICATION testpub_for_allsequences_alltables;
 
--- fail - specifying tables more than once;
+-- fail - FOR ALL specifying TABLES more than once
 CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
 
--- fail - specifying sequences more than once;
+-- fail - FOR ALL specifying SEQUENCES more than once
 CREATE PUBLICATION testpub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
 
 -- Tests for partitioned tables
#67Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#65)
Re: Logical Replication of sequences

The latest (v20240704) patch 0001 LGTM

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#68Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#65)
Re: Logical Replication of sequences

Hi Vignesh.

After applying the v20240703-0003 patch, I was always getting errors
when running the subscription TAP tests.

# +++ tap check in src/test/subscription +++
t/001_rep_changes.pl ............... ok
t/002_types.pl ..................... ok
t/003_constraints.pl ............... ok
t/004_sync.pl ...................... ok
t/005_encoding.pl .................. ok
t/006_rewrite.pl ................... ok
t/007_ddl.pl ....................... 3/?
# Failed test 'Alter subscription set publication throws warning for
non-existent publication'
# at t/007_ddl.pl line 67.
Bailout called. Further testing stopped: pg_ctl stop failed
# Tests were run but no plan was declared and done_testing() was not seen.
FAILED--Further testing stopped: pg_ctl stop failed
make: *** [check] Error 255

~~~

The publisher log shows an Assert TRAP occurred:

2024-07-04 18:15:40.089 AEST [745] mysub1 LOG: statement: SELECT
DISTINCT s.schemaname, s.sequencename
FROM pg_catalog.pg_publication_sequences s
WHERE s.pubname IN ('mypub', 'non_existent_pub', 'non_existent_pub1',
'non_existent_pub2')
TRAP: failed Assert("IsA(list, OidList)"), File:
"../../../src/include/nodes/pg_list.h", Line: 323, PID: 745

~~~

A debugging backtrace looks like below:

Core was generated by `postgres: publisher: walsender postgres
postgres [local] SELECT '.
Program terminated with signal 6, Aborted.
#0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-260.el7_6.6.x86_64 pcre-8.32-17.el7.x86_64
(gdb) bt
#0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6
#1 0x00007f36f44f19b8 in abort () from /lib64/libc.so.6
#2 0x0000000000bb8be1 in ExceptionalCondition (conditionName=0xc7aa6c
"IsA(list, OidList)",
fileName=0xc7aa10 "../../../src/include/nodes/pg_list.h",
lineNumber=323) at assert.c:66
#3 0x00000000005f2c57 in list_nth_oid (list=0x27948f0, n=0) at
../../../src/include/nodes/pg_list.h:323
#4 0x00000000005f5491 in pg_get_publication_sequences
(fcinfo=0x2796a00) at pg_publication.c:1334
#5 0x0000000000763d10 in ExecMakeTableFunctionResult
(setexpr=0x27b2fd8, econtext=0x27b2ef8, argContext=0x2796900,
...

Something goes wrong indexing into that 'sequences' list.

1329 funcctx = SRF_PERCALL_SETUP();
1330 sequences = (List *) funcctx->user_fctx;
1331
1332 if (funcctx->call_cntr < list_length(sequences))
1333 {
1334 Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
1335
1336 SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
1337 }
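
My guess -- an assumption on my part, I have not debugged it -- is that
the 'sequences' list stored in funcctx->user_fctx is built as a pointer
list (lappend) but read back with list_nth_oid(), which requires an
OidList. A minimal sketch of the distinction, using hypothetical code
rather than the actual patch code:

#include "postgres.h"
#include "nodes/pg_list.h"

/* Collect sequence OIDs as a true OidList. */
static List *
collect_sequence_oids(Oid seqoid)
{
    List   *sequences = NIL;

    /* lappend_oid() produces a T_OidList, which list_nth_oid() expects. */
    sequences = lappend_oid(sequences, seqoid);

    /*
     * By contrast, lappend(sequences, some_struct_pointer) would produce
     * a plain T_List, and the list_nth_oid() call below would then hit
     * Assert(IsA(list, OidList)).
     */
    return sequences;
}

/* Read an element back; only valid when the list really is an OidList. */
static Oid
first_sequence_oid(List *sequences)
{
    return list_nth_oid(sequences, 0);
}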

======

Perhaps now it is time to create a CF entry for this thread, so that the
cfbot can catch errors like this earlier.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#69vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#66)
3 attachment(s)
Re: Logical Replication of sequences

On Thu, 4 Jul 2024 at 12:44, Peter Smith <smithpb2250@gmail.com> wrote:

Here are my review comments for the patch v20240703-0002

======
doc/src/sgml/ref/create_publication.sgml

Question: Was there a reason you chose the wording "synchronizes changes"
instead of the same "replicates changes" wording used for FOR ALL TABLES?

Since at this point we only support synchronization of sequences, no
incremental changes are replicated to subscribers, so I thought
"synchronization" was better suited here.
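
To illustrate the intent (the object names below are only examples, not
taken from the patch), the sequence values are fetched as a snapshot at
subscription creation or refresh time rather than being streamed:

-- On the publisher
CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

-- On the subscriber
CREATE SUBSCRIPTION sub_all_seq
    CONNECTION 'dbname=postgres host=publisher'
    PUBLICATION pub_all_seq;

-- Later, re-synchronize the sequence values on demand
ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION SEQUENCES;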

======
src/backend/catalog/system_views.sql

1.
Should there be some new test for the view? Otherwise, AFAICT this
patch has no tests that will exercise the new function
pg_get_publication_sequences.

The pg_publication_sequences view uses pg_get_publication_sequences,
which will be exercised by the 3rd patch while creating a subscription
or refreshing publication sequences, so I felt it is OK not to have a
test here.
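
That said, if a direct test for the view is wanted later, something as
small as the following could be added to publication.sql (it is not part
of the attached patches):

-- Exercise pg_get_publication_sequences via the view
SELECT * FROM pg_publication_sequences
  WHERE pubname = 'testpub_forallsequences'
  ORDER BY schemaname, sequencename;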

5.
- bool for_all_tables; /* Special publication for all tables in db */
+ List    *for_all_objects; /* Special publication for all objects in
+ * db */

Is this OK? Saying "for all objects" seemed misleading.

This change is not required; I have reverted it.

6.
I asked this before in a previous review [1-#17] -- I didn't
understand the point of the sequence 'testpub_seq0' since nobody seems
to be doing anything with it. Should it just be removed? Or is there a
missing test case to use it?

Since all sequences are published, I wanted to have a sequence in another
schema as well. I have also added a describe (\d+) for it.

~~~

7.
Other things to consider:

(I didn't include these in my attached diff)

* could use a single CREATE SEQUENCE stmt instead of multiple

CREATE SEQUENCE does not support specifying multiple sequences in one
statement, so I am skipping this.
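
For example (just an illustration), the first statement below fails with
a syntax error, whereas DROP SEQUENCE does accept a list:

CREATE SEQUENCE testpub_seq0, pub_test.testpub_seq1;  -- ERROR: syntax error
DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1;    -- works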

The rest of the comments are fixed; the attached v20240705 version of the
patch has the changes for them.

Regards,
Vignesh

Attachments:

v20240705-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 59b49a3aca0e4a56ccb99ba408fe6b4383ca97f1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240705 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences  are added in 'init' state to pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
	ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  11 +
 src/backend/catalog/pg_subscription.c         |  63 ++++
 src/backend/commands/subscriptioncmds.c       | 261 +++++++++++++-
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |   9 +
 src/backend/postmaster/bgworker.c             |   3 +
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  91 ++++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 324 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 157 ++++++++-
 src/backend/replication/logical/worker.c      |  15 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |   1 +
 src/include/nodes/parsenodes.h                |   1 +
 src/include/replication/logicalworker.h       |   1 +
 src/include/replication/worker_internal.h     |  19 +
 src/test/subscription/t/034_sequences.pl      | 145 ++++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1097 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f627a3e63c..981ca518f5 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5200,8 +5200,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and the sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index ccdd24312b..af775e6fa9 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the table synchronization workers, the
+    sequence synchronization worker, and parallel apply workers.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 991f629907..62870aa41b 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..fc8a33c0b5 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -27,6 +27,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -194,6 +195,16 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher and re-synchronize
+      the sequence data.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..ca88d5cb0e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -551,3 +552,65 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 	return res;
 }
+
+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */
+List *
+GetSubscriptionSequences(Oid subid, char state)
+{
+	List	   *res = NIL;
+	Relation	rel;
+	HeapTuple	tup;
+	int			nkeys = 0;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	MemoryContext oldctx;
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[nkeys++],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	if (state != '\0')
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(state));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, nkeys, skey);
+
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subseq;
+		SubscriptionRelState *seqinfo;
+		Datum		d;
+		bool		isnull;
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		subseq = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		seqinfo = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+		seqinfo->relid = subseq->srrelid;
+		d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
+							Anum_pg_subscription_rel_srsublsn, &isnull);
+		if (isnull)
+			seqinfo->lsn = InvalidXLogRecPtr;
+		else
+			seqinfo->lsn = DatumGetLSN(d);
+
+		res = lappend(res, seqinfo);
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	return res;
+}
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e407428dbc..f7e51dad09 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -759,6 +760,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List *sequences;
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -769,6 +771,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * Get the table list from publisher and build local table status
 			 * info.
@@ -898,6 +916,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		/* Get the table list from publisher. */
 		pubrel_names = fetch_table_list(wrconn, sub->publications);
 
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
+
 		/* Get local table list. */
 		subrel_states = GetSubscriptionRelations(sub->oid, false);
 		subrel_count = list_length(subrel_states);
@@ -980,6 +1001,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1006,13 +1028,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				/* Stop the worker if relation kind is not sequence*/
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
@@ -1047,7 +1071,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		for (off = 0; off < remove_rel_len; off++)
 		{
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+				get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1077,6 +1102,142 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Refresh the sequences data of the subscription.
+ */
+static void
+AlterSubscription_refreshsequences(Subscription *sub)
+{
+	char	   *err;
+	List	   *pubseq_names = NIL;
+	List	   *subseq_states;
+	Oid		   *subseq_local_oids;
+	Oid		   *pubseq_local_oids;
+	int			off;
+	int			subrel_count;
+	Relation	rel = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	PG_TRY();
+	{
+		/* Get the sequences from the publisher. */
+		pubseq_names = fetch_sequence_list(wrconn, sub->publications);
+
+		/* Get local sequence list. */
+		subseq_states = GetSubscriptionSequences(sub->oid, '\0');
+		subrel_count = list_length(subseq_states);
+
+		/*
+		 * Build qsorted array of local sequence oids for faster lookup. This
+		 * can potentially contain all sequences in the database so speed of
+		 * lookup is important.
+		 */
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
+		off = 0;
+		foreach_ptr(SubscriptionSeqInfo, seqinfo, subseq_states)
+			subseq_local_oids[off++] = seqinfo->seqid;
+
+		qsort(subseq_local_oids, subrel_count, sizeof(Oid), oid_cmp);
+
+		/*
+		 * Walk over the remote sequences and try to match them to locally
+		 * known sequences. If the sequence is not known locally create a new
+		 * state for it.
+		 *
+		 * Also builds array of local oids of remote sequences for the next
+		 * step.
+		 */
+		off = 0;
+		pubseq_local_oids = palloc(list_length(pubseq_names) * sizeof(Oid));
+
+		foreach_ptr(RangeVar, rv, pubseq_names)
+		{
+			Oid			relid;
+
+			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+			/* Check for supported relkind. */
+			CheckSubscriptionRelkind(get_rel_relkind(relid),
+									 rv->schemaname, rv->relname);
+
+			pubseq_local_oids[off++] = relid;
+
+			if (!bsearch(&relid, subseq_local_oids,
+						 subrel_count, sizeof(Oid), oid_cmp))
+			{
+				AddSubscriptionRelState(sub->oid, relid,
+										SUBREL_STATE_INIT,
+										InvalidXLogRecPtr, true);
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+										 rv->schemaname, rv->relname, sub->name)));
+			}
+		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		qsort(pubseq_local_oids, list_length(pubseq_names),
+			  sizeof(Oid), oid_cmp);
+
+		for (off = 0; off < subrel_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubseq_local_oids,
+						 list_length(pubseq_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			else
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
+			}
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+
+	if (rel)
+		table_close(rel, NoLock);
+}
+
 /*
  * Alter the existing subscription.
  */
@@ -1404,6 +1565,20 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refreshsequences(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_REFRESH:
 			{
 				if (!sub->enabled)
@@ -2060,11 +2235,17 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
+		char	   *schemaname;
+		char	   *tablename;
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			schemaname = get_namespace_name(get_rel_namespace(relid));
+			tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2234,6 +2415,72 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	bool		first;
+	List	   *tablelist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "  FROM pg_catalog.pg_publication_sequences s\n"
+						   " WHERE s.pubname IN (");
+	first = true;
+	foreach_ptr(String, pubname, publications)
+	{
+		if (first)
+			first = false;
+		else
+			appendStringInfoString(&cmd, ", ");
+
+		appendStringInfoString(&cmd, quote_literal_cstr(pubname->sval));
+	}
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		tablelist = lappend(tablelist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return tablelist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index f06584f2e7..cc390b3554 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10840,6 +10840,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..6770e26569 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -132,6 +132,9 @@ static const struct
 	},
 	{
 		"TablesyncWorkerMain", TablesyncWorkerMain
+	},
+	{
+		"SequencesyncWorkerMain", SequencesyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..2451eca0fe 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -267,6 +267,39 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip non sequence sync workers. */
+		if (!isSequencesyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (only_running && w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -291,6 +324,33 @@ logicalrep_workers_find(Oid subid, bool only_running)
 	return res;
 }
 
+/*
+ * Return the pid of the apply worker for one that matches given
+ * subscription id.
+ */
+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)
+{
+	int			i;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	for (i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		if (isApplyWorker(w) && w->subid == subid && (only_running && w->proc))
+		{
+			LWLockRelease(LogicalRepWorkerLock);
+			return w;
+		}
+	}
+
+	LWLockRelease(LogicalRepWorkerLock);
+
+	return NULL;
+}
+
 /*
  * Start new logical replication background worker, if possible.
  *
@@ -311,6 +371,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -320,7 +381,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - parallel apply worker is the only kind of subworker
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
-	Assert(is_tablesync_worker == OidIsValid(relid));
+	Assert(is_tablesync_worker == OidIsValid(relid) || is_sequencesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
 	ereport(DEBUG1,
@@ -396,7 +457,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -491,6 +553,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -809,6 +879,20 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequence sync worker failure time
+ *
+ * Called on sequence sync worker failure exit.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+	worker = logicalrep_apply_worker_find(MyLogicalRepWorker->subid, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+}
+
 /*
  * Cleanup function.
  *
@@ -1351,6 +1435,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..92980e8e25
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,324 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * Fetch sequence data (current state) from the remote node, including the
+ * page LSN.
+ */
+static int64
+fetch_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = (Datum) 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char			relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for table \"%s.%s\" from publisher: %s",
+						nspname, RelationGetRelationName(rel), res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	/*
+	 * Logical replication of sequences is based on decoding WAL records,
+	 * describing the "next" state of the sequence the current state in the
+	 * relfilenode is yet to reach. But during the initial sync we read the
+	 * current state, so we need to reconstruct the WAL record logged when we
+	 * started the current batch of sequence values.
+	 *
+	 * Otherwise we might get duplicate values (on subscriber) if we failed
+	 * over right after the sync.
+	 */
+	sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+	/* sets the sequence with sequence_value */
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences()
+{
+	char	   *err;
+	bool		must_use_password;
+	List *sequences;
+	char	   slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner  = false;
+	int 		curr_seq = 0;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionSequences(subid,
+										 SUBREL_STATE_INIT);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences)
+	{
+		Relation	sequencerel;
+		XLogRecPtr	sequence_lsn;
+		int			next_seq;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
+			StartTransactionCommand();
+
+		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless the
+		 * user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+									ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						get_relkind_objtype(sequencerel->rd_rel->relkind),
+						RelationGetRelationName(sequencerel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or superuser,
+		 * who has it implicitly), but other roles should not be able to
+		 * circumvent RLS.  Disallow logical replication into RLS enabled
+		 * relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
+							GetUserNameFromId(GetUserId(), true),
+							RelationGetRelationName(sequencerel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequencerel, NoLock);
+
+		next_seq = curr_seq + 1;
+		if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+		}
+
+		curr_seq++;
+	}
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if it's required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication Sequencesync worker entry point */
+void
+SequencesyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(false);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b00267f042..a15b6cdf0e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -139,9 +139,9 @@ static StringInfo copybuf = NULL;
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(bool istable)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,15 +157,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (istable)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequences synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* No need to set the failure time in case of a clean exit */
+	if (!istable)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -387,7 +396,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(true);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -463,6 +472,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +497,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +675,113 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequence synchronization worker is already running, there is no need
+ * to start another one; the existing worker will synchronize the sequences.
+ * If any sequences still need to be synced after that worker exits, a new
+ * sequence sync worker can be started in the next iteration of this
+ * function. To prevent starting sequence sync workers at a high frequency
+ * after a failure, we store the last failure time of the worker and launch
+ * a new one only after at least wal_retrieve_retry_interval has passed
+ * since that failure.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Start a sequence sync worker if one is not already running.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+															true);
+		/*
+		 * If a sequence sync worker is already running, it will take care
+		 * of synchronizing this sequence as well; nothing more to do here.
+		 */
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int	nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence sync
+			 * worker to sync the sequences and break from the loop, as this
+			 * sequence sync worker will take care of synchronizing all the
+			 * sequences that are in init state.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				TimestampTz now = GetCurrentTimestamp();
+				if (!MyLogicalRepWorker->sequencesync_failure_time ||
+					TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+											   now, wal_retrieve_retry_interval))
+				{
+					MyLogicalRepWorker->sequencesync_failure_time = 0;
+					logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+												MyLogicalRepWorker->dbid,
+												MySubscription->oid,
+												MySubscription->name,
+												MyLogicalRepWorker->userid,
+												InvalidOid,
+												DSM_HANDLE_INVALID);
+					break;
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,9 +804,16 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
 			break;
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1320,7 +1449,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(true);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1716,7 +1845,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(true);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3b285894db..d0b07154d4 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,12 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		/* Sequence sync is not expected to come here */
+		case WORKERTYPE_SEQUENCESYNC:
+			Assert(0);
+			/* not reached, here to make compiler happy */
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4631,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequences synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4646,7 +4656,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4678,6 +4688,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  invalidate_syncing_table_states,
 								  (Datum) 0);
+
+	if (isSequencesyncWorker(MyLogicalRepWorker))
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index da608d074b..5c9586a5b9 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..3cf7834f8d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -91,5 +91,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionSequences(Oid subid, char state);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9a46a563..7f53cd9b71 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,6 +4230,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..f380c1ba60 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -19,6 +19,7 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
 extern void TablesyncWorkerMain(Datum main_arg);
+extern void SequencesyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..3701b1566c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -32,6 +32,7 @@ typedef enum LogicalRepWorkerType
 	WORKERTYPE_TABLESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
+	WORKERTYPE_SEQUENCESYNC,
 } LogicalRepWorkerType;
 
 typedef struct LogicalRepWorker
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -240,6 +243,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +257,10 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(bool istable);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -325,10 +334,14 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
 #define isTablesyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequencesyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
@@ -336,6 +349,12 @@ am_tablesync_worker(void)
 	return isTablesyncWorker(MyLogicalRepWorker);
 }
 
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequencesyncWorker(MyLogicalRepWorker);
+}
+
 static inline bool
 am_leader_apply_worker(void)
 {
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..94bf83a14b
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,145 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus extra sequences that
+# will be created on the publisher later
+$ddl = qq(
+	CREATE TABLE seq_test (v BIGINT);
+	CREATE SEQUENCE s;
+	CREATE SEQUENCE s2;
+	CREATE SEQUENCE s3;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+# Wait for initial sync to finish as well
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');
+
+# Create a new sequence on the publisher; it should be synced by the upcoming refresh
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s2;
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Changes to already-synced sequences should not be picked up by REFRESH PUBLICATION
+$node_publisher->safe_psql(
+	'postgres', qq(
+	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+# Refresh the publication after creating a new sequence and updating an
+# existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'existing sequence not re-synced by REFRESH PUBLICATION');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '132|0|t', 'newly created sequence synced after REFRESH PUBLICATION');
+
+# Changes to both the new and the existing sequence should be synced after
+# REFRESH PUBLICATION SEQUENCES.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE s3;
+	INSERT INTO seq_test SELECT nextval('s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+));
+
+# Refresh the publication sequences after creating a new sequence and
+# updating an existing sequence.
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s2;
+));
+
+is($result, '231|0|t', 'existing sequence re-synced after REFRESH PUBLICATION SEQUENCES');
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s3;
+));
+
+is($result, '132|0|t', 'new sequence synced after REFRESH PUBLICATION SEQUENCES');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index cd426f4214..8d37b45ade 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2770,6 +2770,7 @@ SubscriptingRefState
 Subscription
 SubscriptionInfo
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1
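
For readers skimming the patch, the subscriber-side flow exercised by the
TAP test above boils down to the following sketch (the connection string
and object names are illustrative only):

    -- On the publisher
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- On the subscriber; the initial synchronization also copies the
    -- published sequences
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'dbname=postgres host=publisher_host'
        PUBLICATION seq_pub;

    -- Later, re-synchronize all published sequences on demand
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;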

Attachment: v20240705-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch)
From d0806540c8aacb76c88463928855f915b2de2b17 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240705 1/3] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces a couple of new functions: pg_sequence_state allows
retrieval of a sequence's current state, including its page LSN, and
SetSequenceLastValue enables setting a sequence to a specified last value.
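
As a minimal illustration (not part of the regression tests; the sequence
name demo_seq is made up), the new SQL-callable function can be exercised
like this:

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');
    -- Inspect the on-disk state of the sequence, including its page LSN
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('demo_seq');

SetSequenceLastValue has no SQL-level wrapper; it is a backend-internal
function intended for the sequence synchronization code later in this
series.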
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 169 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 211 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 93ee3d4b60..9429207138 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19506,6 +19506,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns the current on-disk state of the sequence.
+        <literal>page_lsn</literal> is the page LSN of the sequence,
+        <literal>last_value</literal> is the last value of the sequence
+        stored on disk, <literal>log_cnt</literal> shows how many fetches
+        remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether <literal>last_value</literal>
+        has already been returned by <function>nextval</function>.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..bff990afa7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* See the comments on the equivalent code in nextval_internal(). */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1287,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1894,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1909,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence stored on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index e1001a4822..02a096664c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

Attachment: v20240705-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 93edaa7331b72ad9bfebae12aa53b69ac527cef7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240705 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore,
enhancements to psql commands (\d and \dRp) now allow for better display
of publications containing specific sequences or sequences included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" command.
Handling of commands such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
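
For illustration only (publication names are made up), the new syntax and
catalog view can be used as follows:

    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
    -- Object types can also be combined:
    CREATE PUBLICATION all_pub FOR ALL TABLES, SEQUENCES;

    -- List the sequences each publication includes
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname IN ('seq_pub', 'all_pub');

As with FOR ALL TABLES, creating a FOR ALL SEQUENCES publication requires
superuser privileges.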
---
 doc/src/sgml/ref/create_publication.sgml  |  30 +-
 doc/src/sgml/system-views.sgml            |  67 +++
 src/backend/catalog/pg_publication.c      |  86 +++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    |  23 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 193 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  17 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 487 ++++++++++++----------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  31 ++
 src/tools/pgindent/typedefs.list          |   2 +
 19 files changed, 818 insertions(+), 294 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..9891fef110 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index bdc34cf94e..b893fc2d90 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2147,6 +2152,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets the list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1254,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..77ab4097e3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1919,6 +1923,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 							NameStr(form->pubname)),
 					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
 
+		if (form->puballsequences && !superuser_arg(newOwnerId))
+			ereport(ERROR,
+					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					 errmsg("permission denied to change owner of publication \"%s\"",
+							NameStr(form->pubname)),
+					 errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..f06584f2e7 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *alltables,
+											bool *allsequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
 					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,49 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to check whether any object type has been
+ * specified more than once, and set alltables/allsequences accordingly.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *alltables,
+								bool *allsequences, core_yyscan_t yyscanner)
+{
+	bool		alltables_specified = false;
+	bool		allsequences_specified = false;
+
+	if (!all_objects_list)
+		return;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (alltables_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			alltables_specified = true;
+			*alltables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (allsequences_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			allsequences_specified = true;
+			*allsequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 5426f1177c..ba59e5f306 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4206,6 +4206,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4222,23 +4223,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 170000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4250,6 +4257,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4269,6 +4277,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4316,8 +4326,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..cdd6f11989 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,52 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 170000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+			else
+				tuples = PQntuples(result);
+
+			if (tuples > 0)
+				printTableAddFooter(&cont, _("Publications:"));
+
+			/* Might be an empty set - that's ok */
+			for (i = 0; i < tuples; i++)
+			{
+				printfPQExpBuffer(&buf, "    \"%s\"",
+								  PQgetvalue(result, i, 0));
+
+				printTableAddFooter(&cont, buf.data);
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2110,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6292,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6309,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 170000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6424,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6441,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 170000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6455,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6510,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6519,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6531,10 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
+
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..da608d074b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 02a096664c..35d084b590 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11950,6 +11950,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..0f9a46a563 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Types of objects that a FOR ALL ... publication can include
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,		/* All tables */
+	PUBLICATION_ALL_SEQUENCES,	/* All sequences */
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType	pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,7 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..2f8e2c6457 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,91 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences';
+           pubname           | puballtables | puballsequences 
+-----------------------------+--------------+-----------------
+ regress_pub_forallsequences | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences"
+
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences"
+
+\dRp+ regress_pub_forallsequences
+                                  Publication regress_pub_forallsequences
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +310,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +328,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +360,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +376,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +395,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +406,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +442,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +573,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +790,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +977,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1185,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1226,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1307,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1320,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1349,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1375,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1446,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1457,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1478,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1490,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1513,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1524,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1535,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1566,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1578,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1660,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1681,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4c789279e5..bd5efd5d27 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..d27127248f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,37 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences';
+\d+ regress_pub_seq0
+\d+ pub_test.regress_pub_seq1
+\dRp+ regress_pub_forallsequences
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e710fa48e5..cd426f4214 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2245,6 +2245,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

#70vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#68)
Re: Logical Replication of sequences

On Fri, 5 Jul 2024 at 09:46, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

After applying the v20240703-0003 patch, I was always getting errors
when running the subscription TAP tests.

# +++ tap check in src/test/subscription +++
t/001_rep_changes.pl ............... ok
t/002_types.pl ..................... ok
t/003_constraints.pl ............... ok
t/004_sync.pl ...................... ok
t/005_encoding.pl .................. ok
t/006_rewrite.pl ................... ok
t/007_ddl.pl ....................... 3/?
# Failed test 'Alter subscription set publication throws warning for
non-existent publication'
# at t/007_ddl.pl line 67.
Bailout called. Further testing stopped: pg_ctl stop failed
# Tests were run but no plan was declared and done_testing() was not seen.
FAILED--Further testing stopped: pg_ctl stop failed
make: *** [check] Error 255

~~~

The publisher log shows an Assert TRAP occurred:

2024-07-04 18:15:40.089 AEST [745] mysub1 LOG: statement: SELECT
DISTINCT s.schemaname, s.sequencename
FROM pg_catalog.pg_publication_sequences s
WHERE s.pubname IN ('mypub', 'non_existent_pub', 'non_existent_pub1',
'non_existent_pub2')
TRAP: failed Assert("IsA(list, OidList)"), File:
"../../../src/include/nodes/pg_list.h", Line: 323, PID: 745

~~~

A debugging backtrace looks like below:

Core was generated by `postgres: publisher: walsender postgres
postgres [local] SELECT '.
Program terminated with signal 6, Aborted.
#0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-260.el7_6.6.x86_64 pcre-8.32-17.el7.x86_64
(gdb) bt
#0 0x00007f36f44f02c7 in raise () from /lib64/libc.so.6
#1 0x00007f36f44f19b8 in abort () from /lib64/libc.so.6
#2 0x0000000000bb8be1 in ExceptionalCondition (conditionName=0xc7aa6c
"IsA(list, OidList)",
fileName=0xc7aa10 "../../../src/include/nodes/pg_list.h",
lineNumber=323) at assert.c:66
#3 0x00000000005f2c57 in list_nth_oid (list=0x27948f0, n=0) at
../../../src/include/nodes/pg_list.h:323
#4 0x00000000005f5491 in pg_get_publication_sequences
(fcinfo=0x2796a00) at pg_publication.c:1334
#5 0x0000000000763d10 in ExecMakeTableFunctionResult
(setexpr=0x27b2fd8, econtext=0x27b2ef8, argContext=0x2796900,
...

Something goes wrong indexing into that 'sequences' list.

1329 funcctx = SRF_PERCALL_SETUP();
1330 sequences = (List *) funcctx->user_fctx;
1331
1332 if (funcctx->call_cntr < list_length(sequences))
1333 {
1334 Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
1335
1336 SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
1337 }

I was not able to reproduce this issue after several runs, but it looks
like 'sequences' needs to be initialized here.
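
For reference, the usual SRF first-call setup looks something like the
sketch below (assuming 'funcctx' is the FuncCallContext declared earlier
in the function; the lookup body is elided):

if (SRF_IS_FIRSTCALL())
{
    MemoryContext oldcontext;
    List       *sequences = NIL;    /* start from an empty list */

    funcctx = SRF_FIRSTCALL_INIT();
    oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

    /* ... collect the published sequence OIDs into 'sequences' ... */

    funcctx->user_fctx = sequences;
    MemoryContextSwitchTo(oldcontext);
}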

Perhaps now it is time to create a CF entry for this thread because
the cfbot could have detected the error earlier.

I have added a commitfest entry for the same at [1]https://commitfest.postgresql.org/49/5111/.

The v20240705 version patch attached at [2]/messages/by-id/CALDaNm3WvLUesGq54JagEkbBh4CBfMoT84Rw7HjL8KML_BSzPw@mail.gmail.com has the change for the same.

[1]: https://commitfest.postgresql.org/49/5111/
[2]: /messages/by-id/CALDaNm3WvLUesGq54JagEkbBh4CBfMoT84Rw7HjL8KML_BSzPw@mail.gmail.com

Regards,
Vignesh

#71Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#69)
1 attachment(s)
Re: Logical Replication of sequences

On Fri, Jul 5, 2024 at 9:58 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 4 Jul 2024 at 12:44, Peter Smith <smithpb2250@gmail.com> wrote:

1.
Should there be some new test for the view? Otherwise, AFAICT this
patch has no tests that will exercise the new function
pg_get_publication_sequences.

The pg_publication_sequences view uses pg_get_publication_sequences, which
will be tested with the 3rd patch while creating a subscription/refreshing
publication sequences. I felt it is ok not to have a test here.

OTOH, if there had been such a test here then the ("sequence = NIL")
bug in patch 0002 code would have been caught earlier in patch 0002
testing instead of later in patch 0003 testing. In general, I think
each patch should be self-contained w.r.t. testing all of its new
code, but if you think another test here is overkill then I am fine
with that too.

//////////

Meanwhile, here are my review comments for patch v20240705-0002

======
doc/src/sgml/ref/create_publication.sgml

1.
The CREATE PUBLICATION page has many examples showing many different
combinations of syntax. I think it would not hurt to add another one
showing SEQUENCES being used.

======
src/backend/commands/publicationcmds.c

2.
+ if (form->puballsequences && !superuser_arg(newOwnerId))
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+ NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a
superuser.")));

You might consider combining this with the previous error in the same
way that the "FOR ALL TABLES" and "FOR ALL SEQUENCES" errors were
combined in CreatePublication. The result would be less code. But, I
also think your current code is fine, so I am just putting this out as
an idea in case you prefer it.

======
src/backend/parser/gram.y

nitpick - added a space in the comment
nitpick - changed the call order slightly because $6 comes before $7

======
src/bin/pg_dump/pg_dump.c

3. getPublications

- if (fout->remoteVersion >= 130000)
+ if (fout->remoteVersion >= 170000)

This should be 180000.

======
src/bin/psql/describe.c

4. describeOneTableDetails

+ /* print any publications */
+ if (pset.sversion >= 170000)
+ {

This should be 180000.

~~~

describeOneTableDetails:
nitpick - removed a redundant "else"
nitpick - simplified the "Publications:" header logic slightly

~~~

5. listPublications

+ if (pset.sversion >= 170000)
+ appendPQExpBuffer(&buf,
+   ",\n  puballsequences AS \"%s\"",
+   gettext_noop("All sequences"));

This should be 180000.

~~~

6. describePublications

+ has_pubsequence = (pset.sversion >= 170000);

This should be 180000.

~

nitpick - remove some blank lines for consistency with nearby code

======
src/include/nodes/parsenodes.h

nitpick - minor change to comment for PublicationAllObjType
nitpick - the meanings of the enums are self-evident; I didn't think
comments were very useful

======
src/test/regress/sql/publication.sql

7.
I think it will also be helpful to arrange for a SEQUENCE to be
published by *multiple* publications. This would test that they get
listed as expected in the "Publications:" part of the "describe" (\d+)
for the sequence.

======
99.
Please also see the attached diffs patch which implements any nitpicks
mentioned above.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240710_SEQ_0002.txt (text/plain; charset=US-ASCII)
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index f06584f..ead6299 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10561,7 +10561,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL pub_obj_type [,...] [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
  *
  * pub_obj_type is one of:
  *
@@ -10591,8 +10591,8 @@ CreatePublicationStmt:
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
 					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $7;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index cdd6f11..ea7bb57 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1866,19 +1866,20 @@ describeOneTableDetails(const char *schemaname,
 			result = PSQLexec(buf.data);
 			if (!result)
 				goto error_return;
-			else
-				tuples = PQntuples(result);
 
+			tuples = PQntuples(result);
+			/* Might be an empty set - that's ok */
 			if (tuples > 0)
+			{
 				printTableAddFooter(&cont, _("Publications:"));
 
-			/* Might be an empty set - that's ok */
-			for (i = 0; i < tuples; i++)
-			{
-				printfPQExpBuffer(&buf, "    \"%s\"",
-								  PQgetvalue(result, i, 0));
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
 
-				printTableAddFooter(&cont, buf.data);
+					printTableAddFooter(&cont, buf.data);
+				}
 			}
 			PQclear(result);
 		}
@@ -6531,10 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-
 		if (has_pubsequence)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
-
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9a46a..798b034 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4163,12 +4163,12 @@ typedef struct PublicationObjSpec
 } PublicationObjSpec;
 
 /*
- * All Publication type
+ * Publication types supported by FOR ALL ...
  */
 typedef enum PublicationAllObjType
 {
-	PUBLICATION_ALL_TABLES,		/* All tables */
-	PUBLICATION_ALL_SEQUENCES,	/* All sequences */
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
 } PublicationAllObjType;
 
 typedef struct PublicationAllObjSpec
#72Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#69)
1 attachment(s)
Re: Logical Replication of sequences

Here are a few comments for patch v20240705-0003.

(This is a WIP. I have only looked at the docs so far.)

======
doc/src/sgml/config.sgml

nitpick - max_logical_replication_workers: /and sequence
synchronization worker/and a sequence synchronization worker/

======
doc/src/sgml/logical-replication.sgml

nitpick - max_logical_replication_workers: re-order list of workers to
be consistent with other docs 1-apply,2-parallel,3-tablesync,4-seqsync

======
doc/src/sgml/ref/alter_subscription.sgml

1.
IIUC the existing "REFRESH PUBLICATION" command will fetch and sync
all new sequences, etc., and/or remove old ones no longer in the
publication. But current docs do not say anything at all about
sequences here. It should say something about sequence behaviour.

~~~

2.
For the existing "REFRESH PUBLICATION" there is a sub-option
"copy_data=true/false". Won't this need some explanation about how it
behaves for sequences? Or will there be another option
"copy_sequences=true/false".

~~~

3.
IIUC the main difference between REFRESH PUBLICATION and REFRESH
PUBLICATION SEQUENCES is that the 2nd command will try to synchronize
all the *existing* sequences to bring them to the same point as
on the publisher, but otherwise they are the same command. If that is
the correct understanding, I don't think that distinction is made very
clear in the current docs.

~~~

nitpick - the synopsis is misplaced. It should not be between ENABLE
and DISABLE. I moved it. Also, it should say "REFRESH PUBLICATION
SEQUENCES" because that is how the new syntax is defined in gram.y

nitpick - REFRESH SEQUENCES. Renamed to "REFRESH PUBLICATION
SEQUENCES". And, shouldn't "from the publisher" say "with the
publisher"?

nitpick - changed the varlistentry "id".

======
99.
Please also see the attached diffs patch which implements any nitpicks
mentioned above.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240710_SEQ_0003_DOCS.txt (text/plain; charset=US-ASCII)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 981ca51..645fc48 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5201,7 +5201,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Specifies maximum number of logical replication workers. This includes
         leader apply workers, parallel apply workers, table synchronization
-        workers and sequence synchronization worker.
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index af775e6..1d3bc6a 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers, sequence
-    synchronization worker and parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fc8a33c..f81faf9 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,8 +26,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
-ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SKIP ( <replaceable class="parameter">skip_option</replaceable> = <replaceable class="parameter">value</replaceable> )
@@ -195,12 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-altersubscription-params-refresh-sequences">
-    <term><literal>REFRESH SEQUENCES</literal></term>
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
     <listitem>
      <para>
       Fetch missing sequences information from publisher and re-synchronize the
-      sequence data from the publisher.
+      sequence data with the publisher.
      </para>
     </listitem>
    </varlistentry>
#73Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#69)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh. Here are the rest of my comments for patch v20240705-0003.

(Apologies for the length of this post, but it was unavoidable due to
this being the 1st review of a very large 1700-line patch)

======
src/backend/catalog/pg_subscription.c

1. GetSubscriptionSequences

+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */

Is that comment right? The palloc seems to be done in
CacheMemoryContext, not in the current context.

~

2.
The code is very similar to the other function
GetSubscriptionRelations(). In fact I did not understand how the 2
functions know what they are returning:

E.g. how does GetSubscriptionRelations not return sequences too?
E.g. how does GetSubscriptionSequences not return relations too?

======
src/backend/commands/subscriptioncmds.c

CreateSubscription:
nitpick - put the sequence logic *after* the relations logic because
that is the order that seems used everywhere else.

~~~

3. AlterSubscription_refresh

- logicalrep_worker_stop(sub->oid, relid);
+ /* Stop the worker if relation kind is not sequence*/
+ if (relkind != RELKIND_SEQUENCE)
+ logicalrep_worker_stop(sub->oid, relid);

Can you give more reasons in the comment for why the stop is skipped for the sequence worker?

~

nitpick - period and space in the comment

~~~

4.
  for (off = 0; off < remove_rel_len; off++)
  {
  if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
- sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE &&
+ get_rel_relkind(sub_remove_rels[off].relid) != RELKIND_SEQUENCE)
  {
Would this new logic perhaps be better written as:

if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
continue;

~~~

AlterSubscription_refreshsequences:
nitpick - rename AlterSubscription_refresh_sequences

~
5.
There is significant code overlap between the existing
AlterSubscription_refresh and the new function
AlterSubscription_refreshsequences. I wonder if it is better to try to
combine the logic and just pass another parameter to
AlterSubscription_refresh saying to update the existing sequences if
necessary. Particularly since the AlterSubscription_refresh is already
tweaked to work for sequences. Of course, the resulting combined
function would be large and complex, but maybe that would still be
better than having giant slabs of nearly identical cut/paste code.
Thoughts?

~~~

check_publications_origin:
nitpick - move variable declarations
~~~

fetch_sequence_list:
nitpick - change /tablelist/seqlist/
nitpick - tweak the spaces of the SQL for alignment (similar to
fetch_table_list)

~

6.
+    " WHERE s.pubname IN (");
+ first = true;
+ foreach_ptr(String, pubname, publications)
+ {
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname->sval));
+ }
+ appendStringInfoChar(&cmd, ')');

IMO this can be written much better by using get_publications_str()
function to do all this list work.
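
Something along these lines (sketch only, assuming the same
get_publications_str() helper used by fetch_table_list() is reachable
from here):

appendStringInfoString(&cmd,
                       "SELECT DISTINCT s.schemaname, s.sequencename\n"
                       "  FROM pg_catalog.pg_publication_sequences s\n"
                       " WHERE s.pubname IN (");
get_publications_str(publications, &cmd, true);
appendStringInfoChar(&cmd, ')');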

======
src/backend/replication/logical/launcher.c

7. logicalrep_worker_find

/*
* Walks the workers array and searches for one that matches given
* subscription id and relid.
*
* We are only interested in the leader apply worker or table sync worker.
*/

The above function comment (not in the 0003 patch) is stale because
AFAICT this is also going to return sequence workers if it finds
one.

~~~

8. logicalrep_sequence_sync_worker_find

+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)

There are other similar functions for walking the workers array to
search for a worker. Instead of having different functions for
different cases, wouldn't it be cleaner to combine these into a single
function, where you pass a parameter (e.g. a mask of worker types that
you are interested in finding)?
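
For example, something like this (only a sketch; the mask macro and the
function name are invented):

#define LR_WORKER_MASK(wtype)  (1 << (wtype))

static LogicalRepWorker *
logicalrep_worker_find_by_type(Oid subid, int worker_type_mask,
                               bool only_running)
{
    Assert(LWLockHeldByMe(LogicalRepWorkerLock));

    for (int i = 0; i < max_logical_replication_workers; i++)
    {
        LogicalRepWorker *w = &LogicalRepCtx->workers[i];

        if (!w->in_use || w->subid != subid)
            continue;
        if ((LR_WORKER_MASK(w->type) & worker_type_mask) == 0)
            continue;
        if (only_running && !w->proc)
            continue;

        return w;
    }

    return NULL;
}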

~

nitpick - declare a for loop variable 'i'

~~~

9. logicalrep_apply_worker_find

+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)

All the other find* functions assume the lock is already held
(Assert(LWLockHeldByMe(LogicalRepWorkerLock));). But this one is
different. IMO it might be better to acquire the lock in the caller to
make all the find* functions look the same. Anyway, that will help to
combine everything into one "find" function as suggested in the previous
review comment #8.

~

nitpick - declare a for loop variable 'i'
nitpick - removed unnecessary parens in condition.

~~~

10. logicalrep_worker_launch

/*----------
* Sanity checks:
* - must be valid worker type
* - tablesync workers are only ones to have relid
* - parallel apply worker is the only kind of subworker
*/

The above code-comment (not in the 0003 patch) seems stale. This
should now also mention sequence sync workers, right?

~~~

11.
- Assert(is_tablesync_worker == OidIsValid(relid));
+ Assert(is_tablesync_worker == OidIsValid(relid) ||
is_sequencesync_worker == OidIsValid(relid));

IIUC there is only a single sequence sync worker for handling all the
sequences. So, what does the 'relid' actually mean here when there are
multiple sequences?

~~~

12. logicalrep_seqsyncworker_failuretime

+/*
+ * Set the sequence sync worker failure time
+ *
+ * Called on sequence sync worker failure exit.
+ */

12a.
The comment should be improved to make it clearer that the sync
worker's failure time is stored with the *apply* worker.
See also other review comments in this post about this area -- perhaps
all this can be removed?

~

12b.
Curious if this had to be a separate exit handler or if maybe this could
have been handled by the existing logicalrep_worker_onexit handler.
See also other review comments in this post about this area --
perhaps all this can be removed?

======
.../replication/logical/sequencesync.c

13. fetch_sequence_data

13a.
The function comment has no explanation of what exactly the returned
value means. It seems like it is what you will assign as 'last_value'
on the subscriber-side.

~

13b.
Some of the table functions like this are called like
'fetch_remote_table_info()'. Maybe it is better to do the same here
(e.g. include the word "remote" in the function name).

~

14.
The reason for the addition logic "(last_value + log_cnt)" is not
obvious. I am guessing it might be related to code from
'nextval_internal' (fetch = log = fetch + SEQ_LOG_VALS;) but it is
complicated. It is unfortunate that the field 'log_cnt' seems hardly
commented anywhere at all.

Also, I am not 100% sure if I trust the logic in the first place. The
caller of this function is doing:
sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
/* sets the sequence with sequence_value */
SetSequenceLastValue(RelationGetRelid(rel), sequence_value);

Won't that mean you can get to a situation where the subscriber-side
result of lastval('s') can be *ahead* of lastval('s') on the
publisher? That doesn't seem good.

~~~

copy_sequence:

nitpick - ERROR message. Reword "for table..." to be more like the 2nd
error message immediately below.
nitpick - /RelationGetRelationName(rel)/relname/
nitpick - moved the Assert for 'relkind' to be nearer the assignment.

~

15.
+ /*
+ * Logical replication of sequences is based on decoding WAL records,
+ * describing the "next" state of the sequence the current state in the
+ * relfilenode is yet to reach. But during the initial sync we read the
+ * current state, so we need to reconstruct the WAL record logged when we
+ * started the current batch of sequence values.
+ *
+ * Otherwise we might get duplicate values (on subscriber) if we failed
+ * over right after the sync.
+ */
+ sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+ /* sets the sequence with sequence_value */
+ SetSequenceLastValue(RelationGetRelid(rel), sequence_value);

(This is related to some earlier review comment #14 above). IMO all
this tricky commentary belongs in the function header of
"fetch_sequence_data", where it should be describing that function's
return value.

~~~

LogicalRepSyncSequences:
nitpick - declare void param
nitpick - indentation
nitpick - wrapping
nitpick - /sequencerel/sequence_rel/
nitpick - blank lines

~

16.
+ if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid,
false) == RLS_ENABLED)
+ ereport(ERROR,
+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("user \"%s\" cannot replicate into relation with row-level
security enabled: \"%s\"",
+ GetUserNameFromId(GetUserId(), true),
+ RelationGetRelationName(sequencerel)));

This should be reworded to refer to sequences instead of relations. Maybe like:
user \"%s\" cannot replicate into sequence \"%s\" with row-level
security enabled"

~

17.
The calculations involving the BATCH size seem a bit tricky.
e.g. in 1st place it is doing: (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
e.g. in 2nd place it is doing: (next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0)

Maybe this batch logic can be simplified somehow using a bool variable
for the calculation?
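
For example (sketch only, reusing the names from the patch):

bool    batch_start = (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0;
bool    batch_end = ((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH) == 0 ||
                    (curr_seq + 1) == seq_count;

if (batch_start)
    StartTransactionCommand();

/* ... synchronize the current sequence ... */

if (batch_end)
    CommitTransactionCommand();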

Also, where does the number 100 come from? Why not 1000? Why not 10?
Why have batching at all? Maybe there should be some comment to
describe the reason and the chosen value.

~

18.
+ next_seq = curr_seq + 1;
+ if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
+ {
+ /* LOG all the sequences synchronized during current batch. */
+ int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+ for (; i <= curr_seq; i++)
+ {
+ SubscriptionRelState *done_seq;
+ done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
+ ereport(LOG,
+ errmsg("logical replication synchronization for subscription \"%s\",
sequence \"%s\" has finished",
+    get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+ }
+
+ CommitTransactionCommand();
+ }
+
+ curr_seq++;

I feel this batching logic needs more comments describing what you are
doing here.

~~~

SequencesyncWorkerMain:
nitpick - spaces in the function comment

======
src/backend/replication/logical/tablesync.c

19. finish_sync_worker

-finish_sync_worker(void)
+finish_sync_worker(bool istable)

IMO, for better readability (here and in the callers) the new
parameter should be the enum LogicalRepWorkerType. Since we have that
enum, might as well make good use of it.
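
i.e. something like this (sketch only; the message wording is just
indicative):

static void
finish_sync_worker(LogicalRepWorkerType wtype)
{
    Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);

    /* ... existing cleanup, unchanged ... */

    if (wtype == WORKERTYPE_TABLESYNC)
        ereport(LOG,
                errmsg("logical replication table synchronization worker for subscription \"%s\" has finished",
                       MySubscription->name));
    else
        ereport(LOG,
                errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
                       MySubscription->name));
}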

~

nitpick - /sequences synchronization worker/sequence synchronization worker/
nitpick - comment tweak

~

20.
+ char relkind;
+
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ relkind = get_rel_relkind(rstate->relid);
+ if (relkind == RELKIND_SEQUENCE)
+ continue;

I am wondering whether the relkind check can come *before* the TX code
here, because in case there are *only* sequences then maybe everything
would be skipped and there would have been no need for any TX at all
in the first place.

~~~

process_syncing_sequences_for_apply:

nitpick - fix typo and slight reword function header comment. Also
/last start time/last failure time/
nitpick - tweak comments
nitpick - blank lines

~

21.
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ relkind = get_rel_relkind(rstate->relid);
+ if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+ continue;

Wondering (like in review comment #20) if it is possible to swap those
because maybe there was no reason for any TX if the other condition
would always continue.

~~~

22.
+ if (nsyncworkers < max_sync_workers_per_subscription)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;

It seems to me that the 'sequencesync_failure_time' logic may be
unnecessarily complicated. Can't the same "throttling" be achieved by
storing the synchronization worker 'start time' instead of the 'fail
time', in which case you won't have to mess around with
considering whether the sync worker failed or just exited normally etc.? You
might also be able to remove all the
logicalrep_seqsyncworker_failuretime() exit handler code.
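
For example (sketch only; 'sequencesync_start_time' is a made-up field
name):

if (nsyncworkers < max_sync_workers_per_subscription)
{
    TimestampTz now = GetCurrentTimestamp();

    if (TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_start_time,
                                   now, wal_retrieve_retry_interval))
    {
        MyLogicalRepWorker->sequencesync_start_time = now;

        /* ... launch the sequence sync worker ... */
    }
}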

~~~

process_syncing_tables:
nitpick - let's process tables before sequences (because all other
code is generally in this same order)
nitpick - removed some excessive comments about code that is not
supposed to happen

======
src/backend/replication/logical/worker.c

should_apply_changes_for_rel:
nitpick - IMO there were excessive comments for something that is not
going to happen

~~~

23. InitializeLogRepWorker

/*
* Common initialization for leader apply worker, parallel apply worker and
* tablesync worker.
*
* Initialize the database connection, in-memory subscription and necessary
* config options.
*/

That comment (not part of patch 0003) is stale; it should now mention
the sequence sync worker as well, right?

~

nitpick - Tweak plural /sequences sync worker/sequence sync worker/

~~~

24. SetupApplyOrSyncWorker

/* Common function to setup the leader apply or tablesync worker. */

That comment (not part of patch 0003) is stale; it should now mention
the sequence sync worker as well, right?

======
src/include/nodes/parsenodes.h

25.
ALTER_SUBSCRIPTION_ADD_PUBLICATION,
ALTER_SUBSCRIPTION_DROP_PUBLICATION,
ALTER_SUBSCRIPTION_REFRESH,
+ ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,

For consistency with your new enum it would be better to also change
the existing enum name ALTER_SUBSCRIPTION_REFRESH ==>
ALTER_SUBSCRIPTION_REFRESH_PUBLICATION.

======
src/include/replication/logicalworker.h

nitpick - IMO should change the function name
/SequencesyncWorkerMain/SequenceSyncWorkerMain/, and in passing make
the same improvement to the TablesyncWorkerMain function name.

======
src/include/replication/worker_internal.h

26.
WORKERTYPE_PARALLEL_APPLY,
+ WORKERTYPE_SEQUENCESYNC,
} LogicalRepWorkerType;

AFAIK the enum order should not matter here so it would be better to
put the WORKERTYPE_SEQUENCESYNC directly after the
WORKERTYPE_TABLESYNC to keep the similar things together.

~

nitpick - IMO change the macro name
/isSequencesyncWorker/isSequenceSyncWorker/, and in passing make the
same improvement to the isTablesyncWorker macro name.

======
src/test/subscription/t/034_sequences.pl

nitpick - Copyright year
nitpick - Modify the "Create subscriber node" comment for consistency
nitpick - Modify comments slightly for the setup structure parts
nitpick - Add or remove various blank lines
nitpick - Since you have sequences 's2' and 's3', IMO it makes more
sense to call the original sequence 's1' instead of just 's'
nitpick - Rearrange so the CREATE PUBLICATION/SUBSCRIPTION can stay together
nitpick - Modified some comment styles to clearly delineate all the
main "TEST" scenarios
nitpick - In the REFRESH PUBLICATION test the create new sequence and
update existing can be combined (like you do in a later test).
nitpick - Changed some of the test messages for REFRESH PUBLICATION
which seemed wrong
nitpick - Added another test for 's1' in REFRESH PUBLICATION SEQUENCES
nitpick - Changed some of the test messages for REFRESH PUBLICATION
SEQUENCES which seemed wrong

~

27.
IIUC the preferred practice is to give these test object names a
'regress_' prefix.

~

28.
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is($result, '132|0|t', 'initial test data replicated');

28a.
Maybe it is better to say "SELECT last_value, log_cnt, is_called"
instead of "SELECT *" ?
Note - this is in a couple of places.

~

28b.
Can you explain why the expected sequence value is 132? AFAICT you
only called nextval('s') 100 times, so why isn't it 100?
My guess is that it seems to be related to code in "nextval_internal"
(fetch = log = fetch + SEQ_LOG_VALS;) but it kind of defies
expectations of the test, so if it really is correct then it needs
commentary.

Actually, I found other regression test code that deals with this:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time, so just test for the expected range
SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM
foo_seq_new;

Do you have to do something similar? Or is this a bug? See my other
review comments for function fetch_sequence_data in sequencesync.c

======
99.
Please also see the attached diffs patch which implements any nitpicks
mentioned above.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240712_SEQ_0003.txt (text/plain; charset=US-ASCII)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index f7e51da..b9eaf2b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -771,37 +771,37 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 */
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
-			/* Add the sequences in init state */
-			sequences = fetch_sequence_list(wrconn, publications);
-			foreach_ptr(RangeVar, rv, sequences)
+			/*
+			 * Get the table list from publisher and build local table status
+			 * info.
+			 */
+			tables = fetch_table_list(wrconn, publications);
+			foreach(lc, tables)
 			{
+				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 				/* Check for supported relkind. */
 				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										rv->schemaname, rv->relname);
+										 rv->schemaname, rv->relname);
 
 				AddSubscriptionRelState(subid, relid, table_state,
 										InvalidXLogRecPtr, true);
 			}
 
-			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
-			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 				/* Check for supported relkind. */
 				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+										rv->schemaname, rv->relname);
 
 				AddSubscriptionRelState(subid, relid, table_state,
 										InvalidXLogRecPtr, true);
@@ -1028,7 +1028,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				/* Stop the worker if relation kind is not sequence*/
+				/* Stop the worker if relation kind is not sequence. */
 				if (relkind != RELKIND_SEQUENCE)
 					logicalrep_worker_stop(sub->oid, relid);
 
@@ -1106,7 +1106,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
  * Refresh the sequences data of the subscription.
  */
 static void
-AlterSubscription_refreshsequences(Subscription *sub)
+AlterSubscription_refresh_sequences(Subscription *sub)
 {
 	char	   *err;
 	List	   *pubseq_names = NIL;
@@ -1574,7 +1574,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refreshsequences(sub);
+				AlterSubscription_refresh_sequences(sub);
 
 				break;
 			}
@@ -2235,13 +2235,11 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname;
-		char	   *tablename;
 
 		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
 		{
-			schemaname = get_namespace_name(get_rel_namespace(relid));
-			tablename = get_rel_name(relid);
+			char *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char *tablename = get_rel_name(relid);
 
 			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
 							 schemaname, tablename);
@@ -2427,14 +2425,14 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
 	TupleTableSlot *slot;
 	Oid			tableRow[2] = {TEXTOID, TEXTOID};
 	bool		first;
-	List	   *tablelist = NIL;
+	List	   *seqlist = NIL;
 
 	Assert(list_length(publications) > 0);
 
 	initStringInfo(&cmd);
 	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
-						   "  FROM pg_catalog.pg_publication_sequences s\n"
-						   " WHERE s.pubname IN (");
+						   "      FROM pg_catalog.pg_publication_sequences s\n"
+						   "      WHERE s.pubname IN (");
 	first = true;
 	foreach_ptr(String, pubname, publications)
 	{
@@ -2470,7 +2468,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
 		Assert(!isnull);
 
 		rv = makeRangeVar(nspname, relname, -1);
-		tablelist = lappend(tablelist, rv);
+		seqlist = lappend(seqlist, rv);
 
 		ExecClearTuple(slot);
 	}
@@ -2478,7 +2476,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return seqlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 6770e26..f8dd93a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,10 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
 	},
 	{
-		"SequencesyncWorkerMain", SequencesyncWorkerMain
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 2451eca..4ab470f 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -276,18 +276,17 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 LogicalRepWorker *
 logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
 {
-	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
 	/* Search for attached worker for a given subscription id. */
-	for (i = 0; i < max_logical_replication_workers; i++)
+	for (int i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
 		/* Skip non sequence sync workers. */
-		if (!isSequencesyncWorker(w))
+		if (!isSequenceSyncWorker(w))
 			continue;
 
 		if (w->in_use && w->subid == subid && (only_running && w->proc))
@@ -331,15 +330,13 @@ logicalrep_workers_find(Oid subid, bool only_running)
 static LogicalRepWorker *
 logicalrep_apply_worker_find(Oid subid, bool only_running)
 {
-	int			i;
-
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	for (i = 0; i < max_logical_replication_workers; i++)
+	for (int i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isApplyWorker(w) && w->subid == subid && (only_running && w->proc))
+		if (isApplyWorker(w) && w->subid == subid && only_running && w->proc)
 		{
 			LWLockRelease(LogicalRepWorkerLock);
 			return w;
@@ -545,7 +542,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -554,7 +551,7 @@ retry:
 			break;
 
 		case WORKERTYPE_SEQUENCESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequencesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication sequencesync worker for subscription %u",
 					 subid);
@@ -941,7 +938,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (isTableSyncWorker(w) && w->subid == subid)
 			res++;
 	}
 
@@ -1392,7 +1389,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 92980e8..0ba8c1a 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -109,8 +109,8 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("could not fetch sequence info for table \"%s.%s\" from publisher: %s",
-						nspname, RelationGetRelationName(rel), res->err)));
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
 
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
@@ -123,12 +123,11 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
 	Assert(!isnull);
 	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
 	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
 
 	ExecDropSingleTupleTableSlot(slot);
 	walrcv_clear_result(res);
 
-	Assert(relkind == RELKIND_SEQUENCE);
-
 	/*
 	 * Logical replication of sequences is based on decoding WAL records,
 	 * describing the "next" state of the sequence the current state in the
@@ -152,12 +151,12 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
  * Start syncing the sequences in the sync worker.
  */
 static void
-LogicalRepSyncSequences()
+LogicalRepSyncSequences(void)
 {
 	char	   *err;
 	bool		must_use_password;
-	List *sequences;
-	char	   slotname[NAMEDATALEN];
+	List 	   *sequences;
+	char		slotname[NAMEDATALEN];
 	AclResult	aclresult;
 	UserContext ucxt;
 	bool		run_as_owner  = false;
@@ -169,8 +168,7 @@ LogicalRepSyncSequences()
 
 	/* Get the sequences that should be synchronized. */
 	StartTransactionCommand();
-	sequences = GetSubscriptionSequences(subid,
-										 SUBREL_STATE_INIT);
+	sequences = GetSubscriptionSequences(subid, SUBREL_STATE_INIT);
 	CommitTransactionCommand();
 
 	/* Is the use of a password mandatory? */
@@ -197,7 +195,7 @@ LogicalRepSyncSequences()
 	seq_count = list_length(sequences);
 	foreach_ptr(SubscriptionRelState, seqinfo, sequences)
 	{
-		Relation	sequencerel;
+		Relation	sequence_rel;
 		XLogRecPtr	sequence_lsn;
 		int			next_seq;
 
@@ -206,7 +204,7 @@ LogicalRepSyncSequences()
 		if (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH == 0)
 			StartTransactionCommand();
 
-		sequencerel = table_open(seqinfo->relid, RowExclusiveLock);
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
 
 		/*
 		 * Make sure that the copy command runs as the sequence owner, unless the
@@ -214,18 +212,18 @@ LogicalRepSyncSequences()
 		 */
 		run_as_owner = MySubscription->runasowner;
 		if (!run_as_owner)
-			SwitchToUntrustedUser(sequencerel->rd_rel->relowner, &ucxt);
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
 
 		/*
 		 * Check that our sequence sync worker has permission to insert into the
 		 * target sequence.
 		 */
-		aclresult = pg_class_aclcheck(RelationGetRelid(sequencerel), GetUserId(),
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
 									ACL_INSERT);
 		if (aclresult != ACLCHECK_OK)
 			aclcheck_error(aclresult,
-						get_relkind_objtype(sequencerel->rd_rel->relkind),
-						RelationGetRelationName(sequencerel));
+						get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						RelationGetRelationName(sequence_rel));
 
 		/*
 		 * COPY FROM does not honor RLS policies.  That is not a problem for
@@ -234,28 +232,30 @@ LogicalRepSyncSequences()
 		 * circumvent RLS.  Disallow logical replication into RLS enabled
 		 * relations for such roles.
 		 */
-		if (check_enable_rls(RelationGetRelid(sequencerel), InvalidOid, false) == RLS_ENABLED)
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
 							GetUserNameFromId(GetUserId(), true),
-							RelationGetRelationName(sequencerel)));
+							RelationGetRelationName(sequence_rel)));
 
-		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequencerel);
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
 
 		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
 								   sequence_lsn);
 
-		table_close(sequencerel, NoLock);
+		table_close(sequence_rel, NoLock);
 
 		next_seq = curr_seq + 1;
 		if (((next_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) || next_seq == seq_count)
 		{
 			/* LOG all the sequences synchronized during current batch. */
 			int i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+
 			for (; i <= curr_seq; i++)
 			{
 				SubscriptionRelState *done_seq;
+
 				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences, i));
 				ereport(LOG,
 						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
@@ -274,7 +274,7 @@ LogicalRepSyncSequences()
 
 /*
  * Execute the initial sync with error handling. Disable the subscription,
- * if it's required.
+ * if required.
  *
  * Allocate the slot name in long-lived context on return. Note that we don't
  * handle FATAL errors which are probably because of system resource error and
@@ -310,9 +310,9 @@ start_sequence_sync()
 	PG_END_TRY();
 }
 
-/* Logical Replication Sequencesync worker entry point */
+/* Logical Replication sequence sync worker entry point */
 void
-SequencesyncWorkerMain(Datum main_arg)
+SequenceSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index a15b6cd..01f5a85 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -164,14 +164,14 @@ finish_sync_worker(bool istable)
 					   get_rel_name(MyLogicalRepWorker->relid)));
 	else
 		ereport(LOG,
-				errmsg("logical replication sequences synchronization worker for subscription \"%s\" has finished",
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
 					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
-	/* No need to set the failure time in case of a clean exit */
+	/* No need to set the sequence failure time when it is a clean exit */
 	if (!istable)
 		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
 
@@ -683,13 +683,13 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
  * synchronization for them.
  *
  * If there is a sequence synchronization worker running already, no need to
- * start a sequence synchronization in this case. The existing sequence
- * sync worker will synchronize the sequences. If there are still any sequences
- * to be synced after the sequence sync worker exited, then we new sequence
- * sync worker can be started in the next iteration. To prevent starting the
- * sequence sync worker at a high frequency after a failure, we store its last
- * start time. We start the sync worker for the same relation after waiting
- * at least wal_retrieve_retry_interval.
+ * start a new one; the existing sequence sync worker will synchronize all the
+ * sequences. If there are still any sequences to be synced after the sequence
+ * sync worker exited, then a new sequence sync worker can be started in the
+ * next iteration. To prevent starting the sequence sync worker at a high
+ * frequency after a failure, we store its last failure time. We start a new
+ * sequence sync worker only after waiting at least
+ * wal_retrieve_retry_interval.
  */
 static void
 process_syncing_sequences_for_apply()
@@ -702,7 +702,7 @@ process_syncing_sequences_for_apply()
 	FetchTableStates(&started_tx);
 
 	/*
-	 * Start sequence sync worker if there is no sequence sync worker running.
+	 * Start sequence sync worker if there is not one already.
 	 */
 	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
 	{
@@ -720,22 +720,19 @@ process_syncing_sequences_for_apply()
 			continue;
 
 		/*
-		 * Check if there is a sequence worker running?
+		 * Check if there is a sequence sync worker already running.
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
 															true);
-		/*
-		 * If there is a sequence sync worker, the sequence sync worker
-		 * will handle sync of this sequence.
-		 */
 		if (syncworker)
 		{
 			/* Now safe to release the LWLock */
 			LWLockRelease(LogicalRepWorkerLock);
 			break;
 		}
+
 		else
 		{
 			/*
@@ -750,13 +747,12 @@ process_syncing_sequences_for_apply()
 
 			/*
 			 * If there are free sync worker slot(s), start a new sequence sync
-			 * worker to sync the sequences and break from the loop, as this
-			 * sequence sync worker will take care of synchronizing all the
-			 * sequences that are in init state.
+			 * worker, and break from the loop.
 			 */
 			if (nsyncworkers < max_sync_workers_per_subscription)
 			{
 				TimestampTz now = GetCurrentTimestamp();
+
 				if (!MyLogicalRepWorker->sequencesync_failure_time ||
 					TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
 											   now, wal_retrieve_retry_interval))
@@ -804,14 +800,13 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			process_syncing_sequences_for_apply();
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
 			break;
 
-		/* Sequence sync is not expected to come here */
 		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
 			Assert(0);
-			/* not reached, here to make compiler happy */
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1837,7 +1832,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d0b0715..63dff38 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,10 +489,9 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
-		/* Sequence sync is not expected to come here */
 		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
 			Assert(0);
-			/* not reached, here to make compiler happy */
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -4639,7 +4638,7 @@ InitializeLogRepWorker(void)
 						get_rel_name(MyLogicalRepWorker->relid))));
 	else if (am_sequencesync_worker())
 		ereport(LOG,
-				(errmsg("logical replication sequences synchronization worker for subscription \"%s\" has started",
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
 						MySubscription->name)));
 	else
 		ereport(LOG,
@@ -4689,7 +4688,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 								  invalidate_syncing_table_states,
 								  (Datum) 0);
 
-	if (isSequencesyncWorker(MyLogicalRepWorker))
+	if (isSequenceSyncWorker(MyLogicalRepWorker))
 		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index f380c1b..47a3326 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,8 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
-extern void SequencesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 3701b15..502ecef 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -338,21 +338,21 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
-#define isSequencesyncWorker(worker) ((worker)->in_use && \
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
 									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
 am_sequencesync_worker(void)
 {
-	return isSequencesyncWorker(MyLogicalRepWorker);
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
index 94bf83a..7272efa 100644
--- a/src/test/subscription/t/034_sequences.pl
+++ b/src/test/subscription/t/034_sequences.pl
@@ -1,5 +1,5 @@
 
-# Copyright (c) 2021, PostgreSQL Global Development Group
+# Copyright (c) 2024, PostgreSQL Global Development Group
 
 # This tests that sequences are synced correctly to the subscriber
 use strict;
@@ -13,101 +13,109 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 $node_publisher->init(allows_streaming => 'logical');
 $node_publisher->start;
 
-# Create subscriber node
+# Initialize subscriber node
 my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
 $node_subscriber->init(allows_streaming => 'logical');
 $node_subscriber->start;
 
-# Create some preexisting content on publisher
+# Setup structure on the publisher
 my $ddl = qq(
 	CREATE TABLE seq_test (v BIGINT);
-	CREATE SEQUENCE s;
+	CREATE SEQUENCE s1;
 );
-
-# Setup structure on the publisher
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Create some the same structure on subscriber, and an extra sequence that
+# Setup the same structure on the subscriber, plus some extra sequences that
 # we'll create on the publisher later
 $ddl = qq(
 	CREATE TABLE seq_test (v BIGINT);
-	CREATE SEQUENCE s;
+	CREATE SEQUENCE s1;
 	CREATE SEQUENCE s2;
 	CREATE SEQUENCE s3;
 );
-
 $node_subscriber->safe_psql('postgres', $ddl);
 
-# Setup logical replication
-my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
-$node_publisher->safe_psql('postgres',
-	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
-
 # Insert initial test data
 $node_publisher->safe_psql(
 	'postgres', qq(
 	-- generate a number of values using the sequence
-	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+	INSERT INTO seq_test SELECT nextval('s1') FROM generate_series(1,100);
 ));
 
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION seq_pub FOR ALL SEQUENCES");
 $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
 );
 
-# Wait for initial sync to finish as well
+# Wait for initial sync to finish
 my $synced_query =
   "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
-# Check the data on subscriber
+#
+# TEST:
+#
+# Check the initial data on subscriber
+#
 my $result = $node_subscriber->safe_psql(
 	'postgres', qq(
-	SELECT * FROM s;
+	SELECT * FROM s1;
 ));
-
 is($result, '132|0|t', 'initial test data replicated');
 
-# create a new sequence, it should be synced
+#
+# TEST:
+#
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+#
+
+# create a new sequence 's2', and update existing sequence 's1'
 $node_publisher->safe_psql(
 	'postgres', qq(
 	CREATE SEQUENCE s2;
 	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
-));
 
-# changes to existing sequences should not be synced
-$node_publisher->safe_psql(
-	'postgres', qq(
-	INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+    -- Existing sequence
+	INSERT INTO seq_test SELECT nextval('s1') FROM generate_series(1,100);
 ));
 
-# Refresh publication after create a new sequence and updating existing
-# sequence.
+# do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
 $result = $node_subscriber->safe_psql(
 	'postgres', qq(
 	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION
 ));
-
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
-# Check the data on subscriber
+# check - existing sequence is not synced
 $result = $node_subscriber->safe_psql(
 	'postgres', qq(
-	SELECT * FROM s;
+	SELECT * FROM s1;
 ));
+is($result, '132|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
 
-is($result, '132|0|t', 'initial test data replicated');
-
+# check - newly published sequence is synced
 $result = $node_subscriber->safe_psql(
 	'postgres', qq(
 	SELECT * FROM s2;
 ));
+is($result, '132|0|t', 'REFRESH PUBLICATION will sync newly published sequence');
 
-is($result, '132|0|t', 'initial test data replicated');
+#
+# TEST:
+#
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+#
 
-# Changes of both new and existing sequence should be synced after REFRESH
-# PUBLICATION SEQUENCES.
+# create a new sequence 's3', and update the existing sequence 's2'
 $node_publisher->safe_psql(
 	'postgres', qq(
 	CREATE SEQUENCE s3;
@@ -117,8 +125,7 @@ $node_publisher->safe_psql(
 	INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
 ));
 
-# Refresh publication sequences after create new sequence and updating existing
-# sequence.
+# do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
 $result = $node_subscriber->safe_psql(
 	'postgres', qq(
 	ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES
@@ -127,19 +134,23 @@ $result = $node_subscriber->safe_psql(
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
-# Check the data on subscriber
+# check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT * FROM s1;
+));
+is($result, '231|0|t', 'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
 $result = $node_subscriber->safe_psql(
 	'postgres', qq(
 	SELECT * FROM s2;
 ));
+is($result, '231|0|t', 'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
 
-is($result, '231|0|t', 'initial test data replicated');
-
+# check - newly published sequence is synced
 $result = $node_subscriber->safe_psql(
 	'postgres', qq(
 	SELECT * FROM s3;
 ));
-
-is($result, '132|0|t', 'initial test data replicated');
+is($result, '132|0|t', 'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
 
 done_testing();
#74Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#69)
Re: Logical Replication of sequences

Hi,

I was reading back through this thread to find out how the proposed new
command for refreshing sequences came about. The 0705 patch introduces new
command syntax for ALTER SUBSCRIPTION ... REFRESH SEQUENCES.

So now there are 2 forms of subscription refresh.

#1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [=
value] [, ... ] ) ]

#2. ALTER SUBSCRIPTION name REFRESH SEQUENCES

~~~~

IMO, that separation seems complicated. It leaves many questions, like:
* It causes some initial confusion. E.g., when I saw REFRESH SEQUENCES I
first assumed it was needed because sequences were not covered by the
existing REFRESH PUBLICATION.
* Why wasn't command #2 called ALTER SUBSCRIPTION REFRESH PUBLICATION
SEQUENCES? The missing PUBLICATION keyword seems inconsistent.
* I expect sequence values can become stale pretty much immediately after
command #1, so the user will want to use command #2 anyway (see the small
sketch after this list)...
* ... but if command #2 also adds/removes changed sequences the same as
command #1, then what benefit is there in having command #1 handle
sequences at all?
* Command #2 separates sequences (from tables), but no such separation is
currently possible in command #1. That seems inconsistent.
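
To illustrate the staleness point (hypothetical names; the publisher and
subscriber steps are just shown one after the other), whatever sequence
values get copied at refresh time lag as soon as the publisher advances
the sequence again:

-- subscriber
ALTER SUBSCRIPTION sub REFRESH PUBLICATION;  -- copies values of newly added sequences

-- publisher, a moment later
SELECT nextval('s1');  -- the subscriber's copy of s1 is now behind until the next refresh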

~~~

IIUC some of the goals I saw in the thread are to:
* provide a way to fetch and refresh sequences that also keeps behaviors
(e.g. copy_data etc.) consistent with the refresh of subscription tables
* provide a way to fetch and refresh *only* sequences

I felt you could just enhance the existing refresh command syntax (command
#1) instead of introducing a new one; it would be simpler and would still
meet those same objectives.

Synopsis:
ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES | SEQUENCES | ALL] [
WITH ( refresh_option [= value] [, ... ] ) ]

My only change is the introduction of the optional "[TABLES | SEQUENCES |
ALL]" clause.

I believe that can do everything your current patch does, plus more:
* Can refresh *only* TABLES if that is what you want (current patch 0705
cannot do this)
* Can refresh *only* SEQUENCES (same as current patch 0705 command #2)
* Has better integration with refresh options like "copy_data" (current
patch 0705 command #2 doesn't have options)
* Existing REFRESH PUBLICATION syntax still works as-is. You can decide
later what the PG18 default is when "[TABLES | SEQUENCES | ALL]" is omitted.

~~~

More examples using proposed syntax.

ex1.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = false)
- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION
WITH (copy_data = false)

ex2.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = true)
- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION
WITH (copy_data = true)

ex3.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy_data =
false)
- this adds/removes only sequences in pg_subscription_rel but doesn't
update their sequence values

ex4.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy_data = true)
- this adds/removes only sequences in pg_subscription_rel and also updates
all sequence values.
- this is equivalent to what your current 0705 patch does for command #2,
ALTER SUBSCRIPTION sub REFRESH SEQUENCES

ex5.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = false)
- this is equivalent to what your current 0705 patch does for command #1,
ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)

ex6.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = true)
- this adds/removes tables and sequences, updates all table initial data,
and updates all sequence values.
- I think it is equivalent to your current 0705 patch doing command #1
ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true),
followed by command #2 ALTER SUBSCRIPTION sub REFRESH SEQUENCES

ex7.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES
- Because the default for copy_data is true, you do not need to specify
options, so this is the same behaviour as your current 0705 patch
command #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCES.

~~~

I hope this post was able to demonstrate that by enhancing the existing
command:
- there is no tricky distinction between two separate commands to understand
- there is more functionality/flexibility possible
- there is better integration with the refresh options like copy_data
- behaviour for tables/sequences is more consistent

Anyway, it is just my opinion. Maybe there are some pitfalls I'm unaware of.

Thoughts?

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#75vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#71)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 10 Jul 2024 at 09:34, Peter Smith <smithpb2250@gmail.com> wrote:

On Fri, Jul 5, 2024 at 9:58 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 4 Jul 2024 at 12:44, Peter Smith <smithpb2250@gmail.com> wrote:

1.
Should there be some new test for the view? Otherwise, AFAICT this
patch has no tests that will exercise the new function
pg_get_publication_sequences.

The pg_publication_sequences view uses pg_get_publication_sequences, which
will be tested with the 3rd patch while creating a subscription/refreshing
publication sequences. I felt it is ok not to have a test here.

OTOH, if there had been such a test here then the ("sequence = NIL")
bug in patch 0002 code would have been caught earlier in patch 0002
testing instead of later in patch 0003 testing. In general, I think
each patch should be self-contained w.r.t. testing all of its new
code, but if you think another test here is overkill then I am fine
with that too.
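
FWIW, a standalone check of the view could be quite small. Just as a
sketch (object names invented here, not taken from the posted patches),
something like:

CREATE SEQUENCE regress_pub_seq1;
CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

-- the published sequence should show up in the new view
SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'regress_seq_pub' AND sequencename = 'regress_pub_seq1';

DROP PUBLICATION regress_seq_pub;
DROP SEQUENCE regress_pub_seq1;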

Moved these changes to the 0003 patch, where they are actually required.

//////////

Meanwhile, here are my review comments for patch v20240705-0002

All the comments are fixed, and the attached v20240720 patch set includes
these changes.

Regards,
Vignesh

Attachments:

v20240720-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch)
From 41b125dc808a148902b090cf37c497b25acff83f Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240720 1/3] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces a couple of new functions: the pg_sequence_state
function allows retrieval of the sequence state, including the page LSN,
and the SetSequenceLastValue function enables updating a sequence to a
specified last value.
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 169 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 211 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index fd5699f4d8..958a6c2a1d 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19576,6 +19576,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>seq_oid</parameter> <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns the current on-disk state of the sequence.
+        <literal>page_lsn</literal> is the page LSN of the sequence,
+        <literal>last_value</literal> is the last sequence value written to
+        disk, <literal>log_cnt</literal> shows how many fetches remain before
+        a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..bff990afa7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1287,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1894,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1909,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last sequence value written to disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 73d9cf8582..1a949966e0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20240720-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From b0c00539c11bfae596bcf354d4ccd0a29996fb9f Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 19 Jun 2024 14:58:14 +0530
Subject: [PATCH v20240720 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
	ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization by the sequence sync worker,
     following the same steps as in subscription creation.
   - Only the newly added sequences are synchronized.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by the sequence
     sync worker, following the same steps as in subscription creation.
---
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  33 +-
 doc/src/sgml/system-views.sgml                |  67 ++++
 src/backend/catalog/pg_publication.c          |  46 +++
 src/backend/catalog/pg_subscription.c         |  35 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/subscriptioncmds.c       | 204 ++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    | 100 ++++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 356 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 163 +++++++-
 src/backend/replication/logical/worker.c      |  23 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |  10 +-
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  23 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/t/034_sequences.pl      | 156 ++++++++
 src/tools/pgindent/typedefs.list              |   2 +
 28 files changed, 1206 insertions(+), 84 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 3dec0b7cfe..2bb4660336 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index ccdd24312b..1d3bc6a285 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 476f195622..ba8c2b176f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -159,6 +160,19 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      It also fetches the missing sequence information from the publisher and
+      synchronizes the sequence data for newly added sequences. This will
+      start synchronization of sequences that were added to
+      the subscribed-to publications since <link linkend="sql-createsubscription">
+      <command>CREATE SUBSCRIPTION</command></link> or the last invocation of
+      <command>REFRESH PUBLICATION</command>. Additionally, it will remove any
+      sequences that are no longer part of the publication from the
+      <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      system catalog. Sequences that have already been synchronized will not be
+      re-synchronized.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
       refresh operation.  The supported options are:
@@ -168,9 +182,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and sequences
+          in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
@@ -194,6 +208,19 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher and re-synchronize
+      the sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes the newly added sequences, this option will also
+      re-synchronize the sequence data for sequences that were previously added.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index bdc34cf94e..b893fc2d90 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2147,6 +2152,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..5610c0749c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -493,12 +494,19 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * If rel_type is SUB_REL_KIND_SEQUENCE, get only the sequences. If rel_type is
+ * SUB_REL_KIND_TABLE, get only the tables. If rel_type is SUB_REL_KIND_ALL,
+ * get both tables and sequences.
+ * If not_all_relations is true for SUB_REL_KIND_TABLE and SUB_REL_KIND_ALL,
+ * return only the relations that are not in a ready state, otherwise return all
+ * the relations of the subscription. If not_all_relations is true for
+ * SUB_REL_KIND_SEQUENCE, return only the sequences that are in init state,
+ * otherwise return all the sequences of the subscription.
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, SubscriptionRelKind rel_type,
+						 bool not_all_relations)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -514,11 +522,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	/* Get the relations that are not in ready state */
+	if (rel_type != SUB_REL_KIND_SEQUENCE && not_all_relations)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
 					CharGetDatum(SUBREL_STATE_READY));
+	/* Get the sequences that are in init state */
+	else if (rel_type == SUB_REL_KIND_SEQUENCE && not_all_relations)
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(SUBREL_STATE_INIT));
 
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, nkeys, skey);
@@ -529,8 +544,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		subreltype;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		subreltype = get_rel_relkind(subrel->srrelid);
+
+		/* If only tables were requested, skip the sequences */
+		if (rel_type == SUB_REL_KIND_TABLE && subreltype == RELKIND_SEQUENCE)
+			continue;
+
+		/* If only sequences were requested, skip the tables */
+		if (rel_type == SUB_REL_KIND_SEQUENCE && subreltype != RELKIND_SEQUENCE)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 16d83b3253..d23901a5e2 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -102,6 +102,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -760,6 +761,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *sequences;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -790,6 +793,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 										InvalidXLogRecPtr, true);
 			}
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										 rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
@@ -856,12 +875,25 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Refresh the subscription so that it matches the current contents of the
+ * publications, i.e. the tables and sequences published by them.
+ *
+ * If the copy_data parameter is true, newly added relations are registered in
+ * "init" state; otherwise, they are registered in "ready" state. When
+ * validate_publications is provided with a publication list, the function
+ * checks that the specified publications exist on the publisher. If
+ * refresh_all_sequences is true, all sequences are marked with "init" state
+ * for re-synchronization; otherwise, only the newly added relations and
+ * sequences are processed based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications,
+						  bool refresh_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -894,14 +926,21 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 	PG_TRY();
 	{
+		SubscriptionRelKind reltype = refresh_all_sequences ?
+			SUB_REL_KIND_SEQUENCE : SUB_REL_KIND_ALL;
+
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (reltype == SUB_REL_KIND_ALL)
+			/* Get the table list from publisher. */
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, reltype, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -920,9 +959,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (!refresh_all_sequences)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -982,6 +1022,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1003,30 +1044,37 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
+				RemoveSubscriptionRel(sub->oid, relid);
+
 				sub_remove_rels[remove_rel_len].relid = relid;
 				sub_remove_rels[remove_rel_len++].state = state;
 
-				RemoveSubscriptionRel(sub->oid, relid);
-
-				logicalrep_worker_stop(sub->oid, relid);
+				/*
+				 * Since a single sequence sync worker synchronizes all the
+				 * sequences, stop the worker only if the relation kind is
+				 * not a sequence.
+				 */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
 					/*
 					 * Drop the tablesync's origin tracking if exists.
 					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * It is possible that the origin is not yet created
+					 * for tablesync worker, this can happen for the
+					 * states before SUBREL_STATE_FINISHEDCOPY. The
+					 * tablesync worker or apply worker can also
+					 * concurrently try to drop the origin and by this
+					 * time the origin might be already removed. For these
+					 * reasons, passing missing_ok = true.
 					 */
 					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
 													   sizeof(originname));
@@ -1034,10 +1082,25 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			/*
+			 * In case of REFRESH PUBLICATION SEQUENCES, the existing sequences
+			 * should be re-synchronized.
+			 */
+			else if (refresh_all_sequences)
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
 			}
 		}
 
@@ -1048,6 +1111,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1326,8 +1392,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1341,7 +1407,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, false);
 				}
 
 				break;
@@ -1381,8 +1447,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1400,13 +1466,27 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1441,7 +1521,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, false);
 
 				break;
 			}
@@ -1705,7 +1785,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, SUB_REL_KIND_TABLE, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2063,11 +2143,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2237,6 +2321,62 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "      FROM pg_catalog.pg_publication_sequences s\n"
+						   "      WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ead629906e..a04cf2beb1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 27c3a91fb7..17759c22c5 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -237,7 +237,8 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given
  * subscription id and relid.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * We are only interested in the leader apply worker, table sync worker and
+ * sequence sync worker.
  */
 LogicalRepWorker *
 logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
@@ -267,6 +268,38 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip non sequence sync workers. */
+		if (!isSequenceSyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -291,6 +324,26 @@ logicalrep_workers_find(Oid subid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and returns the apply worker that matches the
+ * given subscription id, or NULL if none is found.
+ */
+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)
+{
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		if (isApplyWorker(w) && w->subid == subid && (!only_running || w->proc))
+			return w;
+	}
+
+	return NULL;
+}
+
 /*
  * Start new logical replication background worker, if possible.
  *
@@ -311,6 +364,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -318,11 +372,15 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - must be valid worker type
 	 * - tablesync workers are only ones to have relid
 	 * - parallel apply worker is the only kind of subworker
+	 * - sequencesync workers will not have relid
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
 	Assert(is_tablesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
+	if (is_sequencesync_worker)
+		Assert(!OidIsValid(relid));
+
 	ereport(DEBUG1,
 			(errmsg_internal("starting logical replication worker for subscription \"%s\"",
 							 subname)));
@@ -396,7 +454,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -483,7 +542,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -491,6 +550,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -809,6 +876,26 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequence sync worker's failure time in the subscription's apply
+ * worker, so that the apply worker can throttle relaunching it.
+ *
+ * This function is invoked when the sequence sync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+	worker = logicalrep_apply_worker_find(MyLogicalRepWorker->subid, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -857,7 +944,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (isTableSyncWorker(w) && w->subid == subid)
 			res++;
 	}
 
@@ -1308,7 +1395,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1351,6 +1438,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..7b1d071a81
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,356 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Fetch sequence data (current state) from the remote node, including
+ * the latest sequence value from the publisher and the Page LSN for the
+ * sequence.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = 0;
+
+	initStringInfo(&cmd);
+
+	/*
+	 * In the event of a crash, we can lose (skip over) as many values as were
+	 * pre-logged, and we might get duplicate values in such a scenario. So
+	 * use (last_value + log_cnt) to avoid it.
+	 */
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the retrieved value. The caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	sequence_value = fetch_remote_sequence_data(conn, remoteid, &lsn);
+
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionRelations(subid, SUB_REL_KIND_SEQUENCE, true);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		/*
+		 * Verify whether the current batch of sequences is synchronized or if
+		 * there are no remaining sequences to synchronize.
+		 */
+		if ((((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			(curr_seq + 1) == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+		curr_seq++;
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable; only ordinary, catchable
+ * errors are handled here.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequence sync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..313e5eb357 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -132,16 +132,16 @@ typedef enum
 
 static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static bool FetchTableStates(bool *started_tx, SubscriptionRelKind rel_type);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,15 +157,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* No need to set the sequence failure time when it is a clean exit */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -387,7 +396,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -430,7 +439,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchTableStates(&started_tx, SUB_REL_KIND_ALL);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -463,6 +472,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char		relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +497,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +675,108 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If there is a sequence synchronization worker running already, no need to
+ * start a new one; the existing sequence sync worker will synchronize all the
+ * sequences. If there are still any sequences to be synced after the sequence
+ * sync worker exited, then a new sequence sync worker can be started in the
+ * next iteration. To prevent starting the sequence sync worker at a high
+ * frequency after a failure, we store its last failure time. A new sequence
+ * sync worker is started only after waiting at least
+ * wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx, SUB_REL_KIND_ALL);
+
+	/*
+	 * Start sequence sync worker if there is not one already.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char		relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+														  true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int			nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence
+			 * sync worker, and break from the loop.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				TimestampTz now = GetCurrentTimestamp();
+
+				if (!MyLogicalRepWorker->sequencesync_failure_time ||
+					TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+											   now, wal_retrieve_retry_interval))
+				{
+					MyLogicalRepWorker->sequencesync_failure_time = 0;
+					logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											 MyLogicalRepWorker->dbid,
+											 MySubscription->oid,
+											 MySubscription->name,
+											 MyLogicalRepWorker->userid,
+											 InvalidOid,
+											 DSM_HANDLE_INVALID);
+					break;
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -683,6 +800,12 @@ process_syncing_tables(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1443,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1570,7 +1693,7 @@ copy_table_done:
  * then it is the caller's responsibility to commit it.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(bool *started_tx, SubscriptionRelKind rel_type)
 {
 	static bool has_subrels = false;
 
@@ -1596,7 +1719,7 @@ FetchTableStates(bool *started_tx)
 		}
 
 		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		rstates = GetSubscriptionRelations(MySubscription->oid, rel_type, true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1709,7 +1832,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1840,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1735,7 +1858,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchTableStates(&started_tx, SUB_REL_KIND_TABLE);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index c0bda6269b..71a1b1a948 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4552,8 +4557,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4632,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4640,14 +4649,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,6 +4691,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  invalidate_syncing_table_states,
 								  (Datum) 0);
+
+	if (isSequenceSyncWorker(MyLogicalRepWorker))
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index da608d074b..5c9586a5b9 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1a949966e0..e09e321ab1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11968,6 +11968,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..5584136a35 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,13 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef enum
+{
+	SUB_REL_KIND_TABLE,
+	SUB_REL_KIND_SEQUENCE,
+	SUB_REL_KIND_ALL,
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +97,7 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, SubscriptionRelKind rel_type,
+									  bool not_all_relations);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 515aefd519..e364d9faa2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -240,6 +243,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
@@ -252,6 +257,10 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -325,15 +334,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4c789279e5..bd5efd5d27 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..8f4871de1e
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,156 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(
+	allows_streaming => 'logical',
+	checkpoint_timeout => '1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+# The subscriber's last_value should be the publisher's last_value + log_cnt
+# (100 + 32). Refer to the comments in fetch_remote_sequence_data for details.
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '132|0|t', 'initial test data replicated');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '132|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '132|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '231|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '231|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '132|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 90326c6e53..a875c196a8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2772,7 +2772,9 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1

Attachment: v20240720-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From fb37ce7eb89cb0c24918b498882044a751552d31 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240720 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Additionally, a new system view, pg_publication_sequences, has been
introduced to list all sequences added to a publication. Furthermore, the
psql \d and \dRp commands have been enhanced to display the publications a
sequence belongs to and the sequences included in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" command.
Handling of commands such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
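
For quick reference, a minimal usage sketch that follows the grammar and the
regression tests added by this patch (the publication names below are purely
illustrative; puballsequences is the new pg_publication column introduced
here, and both forms require superuser):

    -- publish every sequence in the database, including future ones
    CREATE PUBLICATION allseq FOR ALL SEQUENCES;

    -- TABLES and SEQUENCES can be combined, each specified at most once
    CREATE PUBLICATION allobj FOR ALL TABLES, SEQUENCES;

    -- the catalog records the choice per publication
    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname IN ('allseq', 'allobj');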
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 699 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..c9c1b925c5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that synchronizes all the sequences:
+<programlisting>
+CREATE PUBLICATION allsequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..ead629906e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *alltables,
+											bool *allsequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,49 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to check if the options have been specified more than
+ * once and set alltables/allsequences.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *alltables,
+								bool *allsequences, core_yyscan_t yyscanner)
+{
+	bool		alltables_specified = false;
+	bool		allsequences_specified = false;
+
+	if (!all_objects_list)
+		return;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (alltables_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			alltables_specified = true;
+			*alltables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (allsequences_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			allsequences_specified = true;
+			*allsequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b8b1888bd3..2fbaf027a9 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4212,6 +4212,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4228,23 +4229,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4256,6 +4263,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4275,6 +4283,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4322,8 +4332,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..0f3f86b076 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			tuples = PQntuples(result);
+			/* Might be an empty set - that's ok */
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d453e224d9..da608d074b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..3ea2224d89 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- Check describe sequence lists both the publications
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..8d553edaa4 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- Check describe sequence lists both the publications
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b4d7f9217c..90326c6e53 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2247,6 +2247,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

#76vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#72)
Re: Logical Replication of sequences

On Wed, 10 Jul 2024 at 13:46, Peter Smith <smithpb2250@gmail.com> wrote:

Here are a few comments for patch v20240705-0003.

(This is a WIP. I have only looked at the docs so far.)

======
doc/src/sgml/config.sgml

nitpick - max_logical_replication_workers: /and sequence
synchornization worker/and a sequence synchornization worker/

======
doc/src/sgml/logical-replication.sgml

nitpick - max_logical_replication_workers: re-order list of workers to
be consistent with other docs 1-apply,2-parallel,3-tablesync,4-seqsync

======
doc/src/sgml/ref/alter_subscription.sgml

1.
IIUC the existing "REFRESH PUBLICATION" command will fetch and sync
all new sequences, etc., and/or remove old ones no longer in the
publication. But current docs do not say anything at all about
sequences here. It should say something about sequence behaviour.

~~~

2.
For the existing "REFRESH PUBLICATION" there is a sub-option
"copy_data=true/false". Won't this need some explanation about how it
behaves for sequences? Or will there be another option
"copy_sequences=true/false".

~~~

3.
IIUC the main difference between REFRESH PUBLICATION and REFRESH
PUBLICATION SEQUENCES is that the 2nd command will try to synchronize
all the *existing* sequences to bring them to the same point as
on the publisher, but otherwise, they are the same command. If that is
a correct understanding, I don't think that distinction is made very
clear in the current docs.

~~~

nitpick - the synopsis is misplaced. It should not be between ENABLE
and DISABLE. I moved it. Also, it should say "REFRESH PUBLICATION
SEQUENCES" because that is how the new syntax is defined in gram.y

nitpick - REFRESH SEQUENCES. Renamed to "REFRESH PUBLICATION
SEQUENCES". And, shouldn't "from the publisher" say "with the
publisher"?

nitpick - changed the varlistentry "id".

======
99.
Please also see the attached diffs patch which implements any nitpicks
mentioned above.

All these comments are handled in the v20240720 version patch attached at [1]/messages/by-id/CALDaNm2vuO7Ya4QVTZKR9jY_mkFFcE_hKUJiXx4KUknPgGFjSg@mail.gmail.com.
[1]: /messages/by-id/CALDaNm2vuO7Ya4QVTZKR9jY_mkFFcE_hKUJiXx4KUknPgGFjSg@mail.gmail.com

Regards,
Vignesh

#77vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#73)
Re: Logical Replication of sequences

On Fri, 12 Jul 2024 at 08:22, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh. Here are the rest of my comments for patch v20240705-0003.
======
src/backend/catalog/pg_subscription.c

1. GetSubscriptionSequences

+/*
+ * Get the sequences for the subscription.
+ *
+ * The returned list is palloc'ed in the current memory context.
+ */

Is that comment right? The palloc seems to be done in
CacheMemoryContext, not in the current context.

This function is removed and GetSubscriptionRelations is being used instead.

~

2.
The code is very similar to the other function
GetSubscriptionRelations(). In fact I did not understand how the 2
functions know what they are returning:

E.g. how does GetSubscriptionRelations not return sequences too?
E.g. how does GetSubscriptionSequences not return relations too?

GetSubscriptionRelations can be used, so removed the
GetSubscriptionSequences function.

3. AlterSubscription_refresh

- logicalrep_worker_stop(sub->oid, relid);
+ /* Stop the worker if relation kind is not sequence*/
+ if (relkind != RELKIND_SEQUENCE)
+ logicalrep_worker_stop(sub->oid, relid);

Can you give more reasons in the comment for why the stop is skipped for the sequence sync worker?

~

nitpick - period and space in the comment

~~~

8. logicalrep_sequence_sync_worker_find

+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)

There are other similar functions for walking the workers array to
search for a worker. Instead of having different functions for
different cases, wouldn't it be cleaner to combine these into a single
function, where you pass a parameter (e.g. a mask of worker types that
you are interested in finding)?

I will address this in a future version once the patch has become more stable.

~~~

11.
- Assert(is_tablesync_worker == OidIsValid(relid));
+ Assert(is_tablesync_worker == OidIsValid(relid) ||
is_sequencesync_worker == OidIsValid(relid));

IIUC there is only a single sequence sync worker for handling all the
sequences. So, what does the 'relid' actually mean here when there are
multiple sequences?

Sequence sync workers will not have relid, modified the assert.

~~~

12. logicalrep_seqsyncworker_failuretime
12b.
Curious if this had to be a separate exit handler or if may this could
have been handled by the existing logicalrep_worker_onexit handler.
See also other review comments in this post about this area --
perhaps all this can be removed?

This function cannot be combined with logicalrep_worker_onexit, as this
function should be called only in the failure case, and the exit handler
should be removed in the success case.

This cannot be removed because of the following reason:
Consider the following situation: a sequence sync worker starts and
then encounters a failure while syncing sequences. At the same time, a
user initiates a "refresh publication sequences" operation. Given only
the start time, it's not possible to distinguish whether the sequence
sync worker failed or completed successfully. This is because the
"refresh publication sequences" operation would have re-added the
sequences, making it unclear whether the sync worker failed or succeeded.

14.
The reason for the addition logic "(last_value + log_cnt)" is not
obvious. I am guessing it might be related to code from
'nextval_internal' (fetch = log = fetch + SEQ_LOG_VALS;) but it is
complicated. It is unfortunate that the field 'log_cnt' seems hardly
commented anywhere at all.

Also, I am not 100% sure if I trust the logic in the first place. The
caller of this function is doing:
sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
/* sets the sequence with sequence_value */
SetSequenceLastValue(RelationGetRelid(rel), sequence_value);

Won't that mean you can get to a situation where subscriber-side
result of lastval('s') can be *ahead* of lastval('s') on the
publisher? That doesn't seem good.

Added comments for "last_value + log_cnt".
Yes, it can be ahead on the subscriber. This happens because not every
change of the sequence is WAL-logged; a new WAL record is written only
once every SEQ_LOG_VALS values. This was discussed earlier and the
sequence value being ahead was considered acceptable.
/messages/by-id/CA+TgmoaVLiKDD5vr1bzL-rxhMA37KCS_2xrqjbKVwGyqK+PCXQ@mail.gmail.com
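
For what it's worth, a rough illustration of why the subscriber can end
up ahead (a hypothetical psql session on the publisher, assuming the
default SEQ_LOG_VALS of 32; exact numbers can vary):

  CREATE SEQUENCE s;
  SELECT nextval('s');                  -- returns 1
  SELECT last_value, log_cnt FROM s;    -- (1, 32); the WAL record written
                                        -- here already describes value 33
  -- Syncing from last_value + log_cnt would therefore set the
  -- subscriber's sequence to 33 even though the publisher has only
  -- handed out 1, which is the "ahead" state discussed above.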

15.
+ /*
+ * Logical replication of sequences is based on decoding WAL records,
+ * describing the "next" state of the sequence the current state in the
+ * relfilenode is yet to reach. But during the initial sync we read the
+ * current state, so we need to reconstruct the WAL record logged when we
+ * started the current batch of sequence values.
+ *
+ * Otherwise we might get duplicate values (on subscriber) if we failed
+ * over right after the sync.
+ */
+ sequence_value = fetch_sequence_data(conn, remoteid, &lsn);
+
+ /* sets the sequence with sequence_value */
+ SetSequenceLastValue(RelationGetRelid(rel), sequence_value);

(This is related to some earlier review comment #14 above). IMO all
this tricky commentary belongs in the function header of
"fetch_sequence_data", where it should be describing that function's
return value.

Moved it to fetch_sequence_data where pg_sequence_state is called to
avoid any confusion

17.
Also, where does the number 100 come from? Why not 1000? Why not 10?
Why have batching at all? Maybe there should be some comment to
describe the reason and the chosen value.

Added a comment for this. I will do one round of testing with a few
values and see if this value needs to be changed. I will share the
results later.

20.
+ char relkind;
+
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ relkind = get_rel_relkind(rstate->relid);
+ if (relkind == RELKIND_SEQUENCE)
+ continue;

I am wondering if it is possible for the relkind check to come
*before* the TX code here, because in case there are *only* sequences
then maybe everything would be skipped and there would have been no
need for any TX at all in the first place.

We need to start the transaction before calling get_rel_relkind, else
it will assert in SearchCatCacheInternal. So skipping this.

21.
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ relkind = get_rel_relkind(rstate->relid);
+ if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+ continue;

Wondering (like in review comment #20) if it is possible to swap those
because maybe there was no reason for any TX if the other condition
would always continue.

As a transaction is required before calling get_rel_relkind, this
cannot be changed. So skipping this.

~~~

22.
+ if (nsyncworkers < max_sync_workers_per_subscription)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;

It seems to me that the 'sequencesync_failure_time' logic may be
unnecessarily complicated. Can't the same "throttling" be achieved by
storing the synchronization worker 'start time' instead of 'fail
time', in which case then you won't have to mess around with
considering if the sync worker failed or just exited normally etc? You
might also be able to remove all the
logicalrep_seqsyncworker_failuretime() exit handler code.

Consider the following situation: a sequence sync worker starts and
then encounters a failure while syncing sequences. At the same time, a
user initiates a "refresh publication sequences" operation. Given only
the start time, it's not possible to distinguish whether the sequence
sync worker failed or completed successfully. This is because the
"refresh publication sequences" operation would have re-added the
sequences, making it unclear whether the sync worker failed or succeeded.

28b.
Can you explain why the expected sequence value is 132, because
AFAICT you only called nextval('s') 100 times, so why isn't it 100?
My guess is that it seems to be related to code in "nextval_internal"
(fetch = log = fetch + SEQ_LOG_VALS;) but it kind of defies
expectations of the test, so if it really is correct then it needs
commentary.

I felt adding comments for one of the tests should be enough, so I did
not add the comment for all of the tests.

Actually, I found other regression test code that deals with this:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time, so just test for the expected range
SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM
foo_seq_new;

Do you have to do something similar? Or is this a bug? See my other
review comments for function fetch_sequence_data in sequencesync.c

The comments in nextval_internal says:
* If this is the first nextval after a checkpoint, we must force a new
* WAL record to be written anyway, else replay starting from the
* checkpoint would fail to advance the sequence past the logged values.
* In this case we may as well fetch extra values.

I have increased the checkpoint interval for this test, so this issue will not occur.
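
To make that range check a bit more concrete, here is a rough
illustration (a hypothetical psql session, assuming the default
SEQ_LOG_VALS of 32 and superuser rights for CHECKPOINT):

  CREATE SEQUENCE foo_seq;
  SELECT nextval('foo_seq');                 -- 1; forces a WAL record 32 ahead
  SELECT nextval('foo_seq');                 -- 2; no WAL record needed
  SELECT last_value, log_cnt FROM foo_seq;   -- (2, 31)
  CHECKPOINT;
  SELECT nextval('foo_seq');                 -- 3; the checkpoint forces a
                                             -- fresh WAL record
  SELECT last_value, log_cnt FROM foo_seq;   -- (3, 32) instead of (3, 30)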

All the other comments were fixed and the same is available in the
v20240720 version attached at [1]/messages/by-id/CALDaNm2vuO7Ya4QVTZKR9jY_mkFFcE_hKUJiXx4KUknPgGFjSg@mail.gmail.com.

[1]: /messages/by-id/CALDaNm2vuO7Ya4QVTZKR9jY_mkFFcE_hKUJiXx4KUknPgGFjSg@mail.gmail.com

Regards,
Vignesh

#78Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#75)
1 attachment(s)
Re: Logical Replication of sequences

Here are some review comments for patch v20240720-0002.

======
1. Commit message:

1a.
The commit message is stale. It is still referring to functions and
views that have been moved to patch 0003.

1b.
"ALL SEQUENCES" is not a command. It is a clause of the CREATE
PUBLICATION command.

======
doc/src/sgml/ref/create_publication.sgml

nitpick - publication name in the example /allsequences/all_sequences/

======
src/bin/psql/describe.c

2. describeOneTableDetails

Although it's not the fault of this patch, this patch propagates the
confusion of 'result' versus 'res'. Basically, I did not understand
the need for the variable 'result'. There is already a "PGresult
*res", and unless I am mistaken we can just keep re-using that instead
of introducing a 2nd variable having almost the same name and purpose.

~

nitpick - comment case
nitpick - rearrange comment

======
src/test/regress/expected/publication.out

(see publication.sql)

======
src/test/regress/sql/publication.sql

nitpick - tweak comment

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240723_SEQ_0002.txt (text/plain; charset=US-ASCII)
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index c9c1b92..7dcfe37 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -422,7 +422,7 @@ CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
   <para>
    Create a publication that synchronizes all the sequences:
 <programlisting>
-CREATE PUBLICATION allsequences FOR ALL SEQUENCES;
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
 </programlisting>
   </para>
  </refsect1>
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 0f3f86b..a92af54 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1851,7 +1851,7 @@ describeOneTableDetails(const char *schemaname,
 		}
 		PQclear(result);
 
-		/* print any publications */
+		/* Print any publications */
 		if (pset.sversion >= 180000)
 		{
 			int			tuples = 0;
@@ -1867,8 +1867,8 @@ describeOneTableDetails(const char *schemaname,
 			if (!result)
 				goto error_return;
 
-			tuples = PQntuples(result);
 			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
 			if (tuples > 0)
 			{
 				printTableAddFooter(&cont, _("Publications:"));
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3ea2224..6c573a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -259,7 +259,7 @@ Publications:
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
 RESET client_min_messages;
--- Check describe sequence lists both the publications
+-- check that describe sequence lists all publications the sequence belongs to
 \d+ pub_test.regress_pub_seq1
                      Sequence "pub_test.regress_pub_seq1"
   Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 8d553ed..ac77fe4 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -134,7 +134,7 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
 RESET client_min_messages;
 
--- Check describe sequence lists both the publications
+-- check that describe sequence lists all publications the sequence belongs to
 \d+ pub_test.regress_pub_seq1
 
 --- FOR ALL specifying both TABLES and SEQUENCES
#79vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#74)
Re: Logical Replication of sequences

On Tue, 16 Jul 2024 at 06:00, Peter Smith <smithpb2250@gmail.com> wrote:

Hi,

I was reading back through this thread to find out how the proposed new command for refreshing sequences, came about. The patch 0705 introduces a new command syntax for ALTER SUBSCRIPTION ... REFRESH SEQUENCES

So now there are 2 forms of subscription refresh.

#1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [= value] [, ... ] ) ]

This is correct.

#2. ALTER SUBSCRIPTION name REFRESH SEQUENCES

This is not correct, it is actually "ALTER SUBSCRIPTION name REFRESH
PUBLICATION SEQUENCES"

~~~~

IMO, that separation seems complicated. It leaves many questions like:
* It causes a bit of initial confusion. e.g. When I saw the REFRESH SEQUENCES I first assumed that was needed because sequences were not covered by the existing REFRESH PUBLICATION
* Why wasn't command #2 called ALTER SUBSCRIPTION REFRESH PUBLICATION SEQUENCES? E.g. missing keyword PUBLICATION. It seems inconsistent.

This is not correct; the existing implementation uses the keyword
PUBLICATION. The actual syntax is:
"ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES"

* I expect sequence values can become stale pretty much immediately after command #1, so the user will want to use command #2 anyway...

Yes

* ... but if command #2 also adds/removes changed sequences the same as command #1, then what benefit was there in having command #1 for sequences?
* There is a separation of sequences (from tables) in command #2 but there is no separation currently possible in command #1. It seemed inconsistent.

This can be enhanced if required. It is not included for now because
I'm not sure if there is such a use case for tables.

~~~

IIUC some of the goals I saw in the thread are to:
* provide a way to fetch and refresh sequences that also keeps behaviors (e.g. copy_data etc.) consistent with the refresh of subscription tables
* provide a way to fetch and refresh *only* sequences

I felt you could just enhance the existing refresh command syntax (command #1) instead of introducing a new one; it would be simpler and it would still meet those same objectives.

Synopsis:
ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES | SEQUENCES | ALL] [ WITH ( refresh_option [= value] [, ... ] ) ]

My only change is the introduction of the optional "[TABLES | SEQUENCES | ALL]" clause.

I believe that can do everything your current patch does, plus more:
* Can refresh *only* TABLES if that is what you want (current patch 0705 cannot do this)
* Can refresh *only* SEQUENCES (same as current patch 0705 command #2)
* Has better integration with refresh options like "copy_data" (current patch 0705 command #2 doesn't have options)
* Existing REFRESH PUBLICATION syntax still works as-is. You can decide later what the PG18 default is if the "[TABLES | SEQUENCES | ALL]" clause is omitted.

~~~

More examples using proposed syntax.

ex1.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = false)
- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)

ex2.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = true)
- same as PG17 functionality for ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true)

ex3.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy_data = false)
- this adds/removes only sequences in pg_subscription_rel but doesn't update their sequence values

ex4.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy_data = true)
- this adds/removes only sequences in pg_subscription_rel and also updates all sequence values.
- this is equivalent behaviour of what your current 0705 patch is doing for command #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCES

ex5.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = false)
- this is equivalent behaviour of what your current 0705 patch is doing for command #1, ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)

ex6.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION ALL WITH (copy_data = true)
- this adds/removes tables and sequences, copies the initial table data, and updates all sequence values.
- I think it is equivalent to your current 0705 patch doing command #1 ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true), followed by command #2 ALTER SUBSCRIPTION sub REFRESH SEQUENCES

ex7.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES
- Because default copy_data is true you do not need to specify options, so this is the same behaviour as your current 0705 patch command #2, ALTER SUBSCRIPTION sub REFRESH SEQUENCES.

I felt ex:4 is equivalent to command #2 "ALTER SUBSCRIPTION name
REFRESH PUBLICATION SEQUENCES", and ex:3 just updates
pg_subscription_rel. But I'm not seeing an equivalent for "ALTER
SUBSCRIPTION name REFRESH PUBLICATION WITH (copy_data = true)", which
identifies and removes the stale entries, adds entries for sequences
newly added on the publisher, and synchronizes only those newly added
sequences.
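
To make the gap concrete, the behaviour I am referring to is the
existing one (reusing the subscription name "sub" from your examples):

ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true);
-- adds/removes entries and synchronizes only the sequences that were
-- newly added on the publisher; already-subscribed sequences are left alone

Under the proposed syntax, ex:6 would instead re-synchronize all
sequences (it is described as command #1 followed by a full sequence
re-sync), so this "newly added sequences only" case still seems to be
missing.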

Regards,
Vignesh

#80Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#75)
1 attachment(s)
Re: Logical Replication of sequences

Hi, here are some review comments for patch v20240720-0003.

This review is a WIP. This post is only about the docs (*.sgml) of patch 0003.

======
doc/src/sgml/ref/alter_subscription.sgml

1. REFRESH PUBLICATION and copy_data
nitpicks:
- IMO the "synchronize the sequence data" info was misleading because
synchronization should only occur when copy_data=true.
- I also felt it was strange to mention pg_subscription_rel for
sequences, but not for tables. I modified this part too.
- Then I moved the information about re/synchronization of sequences
into the "copy_data" part.
- And added another link to ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

Anyway, in summary, I have updated this page quite a lot according to
my understanding. Please take a look at the attached nitpick for my
suggestions.

nitpick - /The supported options are:/The only supported option is:/

~~~

2. REFRESH PUBLICATION SEQUENCES
nitpick - tweaked the wording
nitpick - typo /syncronizes/synchronizes/

======
3. catalogs.sgml

IMO something is missing in Section "1.55. pg_subscription_rel".

Currently, this page only talks of relations/tables, but I think it
should mention "sequences" here too, particularly since now we are
linking to here from ALTER SUBSCRIPTION when talking about sequences.

======
99.
Please see the attached diffs patch which implements any nitpicks
mentioned above.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240724_SEQ_0003_DOCS.txt (text/plain; charset=US-ASCII)
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index ba8c2b1..666d9b0 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -153,7 +153,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
@@ -161,29 +161,26 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
      </para>
 
      <para>
-      It also fetches the missing sequence information from the publisher and
-      synchronize the sequence data for newly added sequences with the
-      publisher. This will start synchronizing of sequences that were added to
-      the subscribed-to publications since <link linkend="sql-createsubscription">
-      <command>CREATE SUBSCRIPTION</command></link> or the last invocation of
-      <command>REFRESH PUBLICATION</command>. Additionally, it will remove any
-      sequences that are no longer part of the publication from the
-      <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
-      system catalog. Sequences that have already been synchronized will not be
-      re-synchronized.
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
      </para>
 
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data for tables and sequences
-          in the publications that are being subscribed to when the replication
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
           starts. The default is <literal>true</literal>.
          </para>
          <para>
@@ -191,6 +188,11 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
          <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
           <link linkend="sql-createsubscription-params-with-origin"><literal>origin</literal></link>
@@ -212,11 +214,11 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
     <listitem>
      <para>
-      Fetch missing sequences information from publisher and re-synchronize the
+      Fetch missing sequence information from the publisher, then re-synchronize
       sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
       <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
-      only syncronizes the newly added sequences, this option will also
-      re-synchronize the sequence data for sequences that were previously added.
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
      </para>
     </listitem>
    </varlistentry>
#81shveta malik
shveta.malik@gmail.com
In reply to: Peter Smith (#80)
Re: Logical Replication of sequences

On Wed, Jul 24, 2024 at 9:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

I had a look at the v20240720* patches (considering these as the latest
ones) and tried to do some basic testing (WIP). A few comments:

1)
I see 'last_value' is updated incorrectly after CREATE SUBSCRIPTION. Steps to reproduce:

-----------
pub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;
SELECT nextval('myseq0');
SELECT nextval('myseq0');
--last_value on pub is 105
select * from pg_sequences;
create publication pub1 for all tables, sequences;

Sub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;
create subscription sub1 connection 'dbname=postgres host=localhost
user=shveta port=5433' publication pub1;

--check 'r' state is reached
select pc.relname, pr.srsubstate, pr.srsublsn from pg_subscription_rel
pr, pg_class pc where (pr.srrelid = pc.oid);

--check 'last_value'; it shows some random value such as 136
select * from pg_sequences;
-----------

2)
I can use 'for all sequences' only with 'for all tables' and cannot
use it with the statements below. Shouldn't that be allowed?

create publication pub2 for tables in schema public, for all sequences;
create publication pub2 for table t1, for all sequences;

3)
preprocess_pub_all_objtype_list():
Do we need 'alltables_specified' and 'allsequences_specified'? Can't
we make the repetition check using *alltables and *allsequences
directly? (See the rough sketch after these comments.)

4) Patch 0002's commit message says: 'Additionally, a new system view,
pg_publication_sequences, has been introduced',
but it is not part of patch 0002.
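
To illustrate comment 3, perhaps something like the following would be
enough (only a rough, untested sketch on top of the existing
preprocess_pub_all_objtype_list() loop; it assumes the caller's
*alltables/*allsequences start out false, which holds today because
makeNode() zeroes CreatePublicationStmt):

    foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
    {
        /* Reuse the output flags themselves to detect repetition. */
        bool       *seen = (obj->pubobjtype == PUBLICATION_ALL_TABLES) ?
            alltables : allsequences;

        if (*seen)
            ereport(ERROR,
                    errcode(ERRCODE_SYNTAX_ERROR),
                    errmsg("invalid publication object list"),
                    errdetail("%s can be specified only once.",
                              obj->pubobjtype == PUBLICATION_ALL_TABLES ?
                              "TABLES" : "SEQUENCES"),
                    parser_errposition(obj->location));

        *seen = true;
    }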

thanks
Shveta

#82vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#80)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 24 Jul 2024 at 09:17, Peter Smith <smithpb2250@gmail.com> wrote:

Hi, here are some review comments for patch v20240720-0003.

This review is a WIP. This post is only about the docs (*.sgml) of patch 0003.

3. catalogs.sgml

IMO something is missing in Section "1.55. pg_subscription_rel".

Currently, this page only talks of relations/tables, but I think it
should mention "sequences" here too, particularly since now we are
linking to here from ALTER SUBSCRIPTION when talking about sequences.

Modified it to mention sequences too.

I have merged the rest of your suggested nitpicks.
The attached v20240724 version of the patch includes these changes.

Regards,
Vignesh

Attachments:

v20240724-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From e859e9bb3c660453a2d08715ad5ed35e4a190ce4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240724 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced to display
the publications a given sequence belongs to and whether a publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 699 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..7dcfe37ffe 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that synchronizes all the sequences:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..ead629906e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *alltables,
+											bool *allsequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,49 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to check if the options have been specified more than
+ * once and set alltables/allsequences.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *alltables,
+								bool *allsequences, core_yyscan_t yyscanner)
+{
+	bool		alltables_specified = false;
+	bool		allsequences_specified = false;
+
+	if (!all_objects_list)
+		return;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (alltables_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			alltables_specified = true;
+			*alltables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (allsequences_specified)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			allsequences_specified = true;
+			*allsequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b8b1888bd3..2fbaf027a9 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4212,6 +4212,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4228,23 +4229,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4256,6 +4263,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4275,6 +4283,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4322,8 +4332,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..a92af54905 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 891face1b6..be0ed1fc27 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b4d7f9217c..90326c6e53 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2247,6 +2247,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20240724-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From 12e9874b0cc358282ac48c1cadc307c3e3095ca2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 24 Jul 2024 11:24:57 +0530
Subject: [PATCH v20240724 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - The sequence synchronization worker then synchronizes them, following
     the same steps as during subscription creation.
   - Only newly added sequences are synchronized; previously subscribed
     sequences are left untouched.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - The sequence synchronization worker then re-synchronizes all sequences,
     following the same steps as during subscription creation (a usage
     sketch follows below).
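
For reference, a rough usage sketch of the workflow described above (not part
of the patch; the publication/subscription names and connection string are
illustrative, and the srsubstate codes are assumed to reuse the existing
'i' = init and 'r' = ready values for sequences):

    -- publisher: publish every sequence (syntax from the earlier patch in this series)
    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

    -- subscriber: sequences are registered in pg_subscription_rel in 'init'
    -- state and synchronized by the sequence synchronization worker
    CREATE SUBSCRIPTION sub_all_seq
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub_all_seq;

    -- check the synchronization state of the subscribed sequences
    SELECT srrelid::regclass AS sequence, srsubstate
      FROM pg_subscription_rel
     WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');

    -- pick up sequences newly added to the publication; only those are synchronized
    ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION;

    -- re-synchronize the data of all subscribed sequences
    ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION SEQUENCES;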
---
 doc/src/sgml/catalogs.sgml                    |  13 +-
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  39 +-
 doc/src/sgml/system-views.sgml                |  67 ++++
 src/backend/catalog/pg_publication.c          |  46 +++
 src/backend/catalog/pg_subscription.c         |  35 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/subscriptioncmds.c       | 204 ++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    | 100 ++++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 356 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 163 +++++++-
 src/backend/replication/logical/worker.c      |  23 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |  10 +-
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  23 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/t/034_sequences.pl      | 156 ++++++++
 src/tools/pgindent/typedefs.list              |   2 +
 29 files changed, 1217 insertions(+), 92 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..19d04b107e 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8103,14 +8103,15 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
 
   <para>
    The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   state for each replicated table and sequence in each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
+   This catalog only contains tables and sequences known to the subscription
+   after running either
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+   or <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
    PUBLICATION</command></link>.
   </para>
 
@@ -8145,7 +8146,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 3dec0b7cfe..2bb4660336 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index ccdd24312b..1d3bc6a285 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 6af6d0d2c8..d3029b9e91 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -153,30 +154,45 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and to
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -195,6 +211,19 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index bdc34cf94e..b893fc2d90 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2147,6 +2152,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
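+		/*
+		 * Only publications that publish all sequences (allsequences) have
+		 * any associated sequences; otherwise the list remains empty.
+		 */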
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..5610c0749c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -493,12 +494,19 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * If rel_type is SUB_REL_KIND_SEQUENCE, get only the sequences. If rel_type is
+ * SUB_REL_KIND_TABLE, get only the tables. If rel_type is SUB_REL_KIND_ALL,
+ * get both tables and sequences.
+ * If not_all_relations is true for SUB_REL_KIND_TABLE and SUB_REL_KIND_ALL,
+ * return only the relations that are not in a ready state, otherwise return all
+ * the relations of the subscription. If not_all_relations is true for
+ * SUB_REL_KIND_SEQUENCE, return only the sequences that are in init state,
+ * otherwise return all the sequences of the subscription.
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, SubscriptionRelKind rel_type,
+						 bool not_all_relations)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -514,11 +522,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	/* Get the relations that are not in ready state */
+	if (rel_type != SUB_REL_KIND_SEQUENCE && not_all_relations)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
 					CharGetDatum(SUBREL_STATE_READY));
+	/* Get the sequences that are in init state */
+	else if (rel_type == SUB_REL_KIND_SEQUENCE && not_all_relations)
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(SUBREL_STATE_INIT));
 
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, nkeys, skey);
@@ -529,8 +544,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		subreltype;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
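+		/*
+		 * pg_subscription_rel does not record the relation kind, so look it
+		 * up to filter by the requested kind.
+		 */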
+		subreltype = get_rel_relkind(subrel->srrelid);
+
+		/* If only tables were requested, skip the sequences */
+		if (rel_type == SUB_REL_KIND_TABLE && subreltype == RELKIND_SEQUENCE)
+			continue;
+
+		/* If only sequences were requested, skip the tables */
+		if (rel_type == SUB_REL_KIND_SEQUENCE && subreltype != RELKIND_SEQUENCE)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..3318598f08 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -103,6 +103,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -751,6 +752,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *sequences;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -781,6 +784,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 										InvalidXLogRecPtr, true);
 			}
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										 rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
@@ -847,12 +866,25 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If the copy_data parameter is true, the function will set the state
+ * to "init"; otherwise, it will set the state to "ready". When
+ * validate_publications is provided with a publication list, the function
+ * checks that the specified publications exist on the publisher. If
+ * refresh_all_sequences is true, it will mark all sequences with "init" state
+ * for re-synchronization; otherwise, only the newly added relations and
+ * sequences will be updated based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications,
+						  bool refresh_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -885,14 +917,21 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 	PG_TRY();
 	{
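+		/*
+		 * For REFRESH PUBLICATION SEQUENCES, fetch only the sequence list;
+		 * a plain refresh considers both tables and sequences.
+		 */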
+		SubscriptionRelKind reltype = refresh_all_sequences ?
+			SUB_REL_KIND_SEQUENCE : SUB_REL_KIND_ALL;
+
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (reltype == SUB_REL_KIND_ALL)
+			/* Get the table list from publisher. */
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, reltype, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +950,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (!refresh_all_sequences)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -973,6 +1013,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,30 +1035,37 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
+				RemoveSubscriptionRel(sub->oid, relid);
+
 				sub_remove_rels[remove_rel_len].relid = relid;
 				sub_remove_rels[remove_rel_len++].state = state;
 
-				RemoveSubscriptionRel(sub->oid, relid);
-
-				logicalrep_worker_stop(sub->oid, relid);
+				/*
+				 * Since a single sequence sync worker synchronizes all the
+				 * sequences, stop the worker only if the relation kind is
+				 * not sequence.
+				 */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
 					/*
 					 * Drop the tablesync's origin tracking if exists.
 					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * It is possible that the origin is not yet created
+					 * for tablesync worker, this can happen for the
+					 * states before SUBREL_STATE_FINISHEDCOPY. The
+					 * tablesync worker or apply worker can also
+					 * concurrently try to drop the origin and by this
+					 * time the origin might be already removed. For these
+					 * reasons, passing missing_ok = true.
 					 */
 					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
 													   sizeof(originname));
@@ -1025,10 +1073,25 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			/*
+			 * In case of REFRESH PUBLICATION SEQUENCES, the existing sequences
+			 * should be re-synchronized.
+			 */
+			else if (refresh_all_sequences)
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
 			}
 		}
 
@@ -1039,6 +1102,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
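+			/*
+			 * Sequences have no per-relation sync slots or origins, so there
+			 * is nothing to clean up for them here.
+			 */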
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1490,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1505,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, false);
 				}
 
 				break;
@@ -1479,8 +1545,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1564,27 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1619,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, false);
 
 				break;
 			}
@@ -1804,7 +1884,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, SUB_REL_KIND_TABLE, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2242,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2420,62 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "      FROM pg_catalog.pg_publication_sequences s\n"
+						   "      WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ead629906e..a04cf2beb1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..76b934234b 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -237,7 +237,8 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given
  * subscription id and relid.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * We are only interested in the leader apply worker, table sync worker and
+ * sequence sync worker.
  */
 LogicalRepWorker *
 logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
@@ -267,6 +268,38 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip non sequence sync workers. */
+		if (!isSequenceSyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -297,6 +330,26 @@ logicalrep_workers_find(Oid subid, bool only_running, bool acquire_lock)
 	return res;
 }
 
+/*
+ * Walks the workers array and returns the apply worker that matches the
+ * given subscription id, or NULL if none is found.
+ */
+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)
+{
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		if (isApplyWorker(w) && w->subid == subid && (!only_running || w->proc))
+			return w;
+	}
+
+	return NULL;
+}
+
 /*
  * Start new logical replication background worker, if possible.
  *
@@ -317,6 +370,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -324,11 +378,15 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - must be valid worker type
 	 * - tablesync workers are only ones to have relid
 	 * - parallel apply worker is the only kind of subworker
+	 * - sequencesync workers will not have relid
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
 	Assert(is_tablesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
+	if (is_sequencesync_worker)
+		Assert(!OidIsValid(relid));
+
 	ereport(DEBUG1,
 			(errmsg_internal("starting logical replication worker for subscription \"%s\"",
 							 subname)));
@@ -402,7 +460,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +548,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +556,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -815,6 +882,26 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time for the sequence sync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequence sync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+	worker = logicalrep_apply_worker_find(MyLogicalRepWorker->subid, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +950,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (isTableSyncWorker(w) && w->subid == subid)
 			res++;
 	}
 
@@ -1314,7 +1401,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1444,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..7b1d071a81
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,356 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
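+ *	  A single sequence sync worker is launched per subscription by the
+ *	  apply worker; it synchronizes all sequences that are in INIT state,
+ *	  committing after every batch of sequences rather than per sequence.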
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Fetch sequence data (current state) from the remote node, including
+ * the latest sequence value from the publisher and the Page LSN for the
+ * sequence.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {INT8OID, LSNOID};
+	int64		value = 0;
+
+	initStringInfo(&cmd);
+
+	/*
+	 * In the event of a crash, we can lose (skip over) as many values as we
+	 * pre-logged, and we might get duplicate values in such scenarios. So
+	 * use (last_value + log_cnt) to avoid that.
+	 */
+	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
+					 "FROM pg_sequence_state(%u)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * withe the retreived value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	sequence_value = fetch_remote_sequence_data(conn, remoteid, &lsn);
+
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionRelations(subid, SUB_REL_KIND_SEQUENCE, true);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
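+		/*
+		 * Unlike tables, sequences have no intermediate sync states; the
+		 * sequence goes straight from INIT to READY once its value has been
+		 * copied.
+		 */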
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		/*
+		 * Verify whether the current batch of sequences is synchronized or if
+		 * there are no remaining sequences to synchronize.
+		 */
+		if ((((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			(curr_seq + 1) == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+		curr_seq++;
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequence sync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..313e5eb357 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -132,16 +132,16 @@ typedef enum
 
 static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static bool FetchTableStates(bool *started_tx, SubscriptionRelKind rel_type);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,15 +157,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* No need to set the sequence failure time when it is a clean exit */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -387,7 +396,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -430,7 +439,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchTableStates(&started_tx, SUB_REL_KIND_ALL);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -463,6 +472,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char		relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
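+		/* Sequences are synchronized separately by the sequence sync worker. */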
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +497,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +675,108 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If there is a sequence synchronization worker running already, no need to
+ * start a new one; the existing sequence sync worker will synchronize all the
+ * sequences. If there are still any sequences to be synced after the sequence
+ * sync worker exited, then a new sequence sync worker can be started in the
+ * next iteration. To prevent starting the sequence sync worker at a high
+ * frequency after a failure, we store its last failure time and start a new
+ * sync worker only after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx, SUB_REL_KIND_ALL);
+
+	/*
+	 * Start a sequence sync worker if one is not already running.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char		relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+														  true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
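+
+			/*
+			 * The sequence sync worker synchronizes all the sequences, so
+			 * there is nothing more to do here.
+			 */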
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int			nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence
+			 * sync worker, and break from the loop.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				TimestampTz now = GetCurrentTimestamp();
+
+				if (!MyLogicalRepWorker->sequencesync_failure_time ||
+					TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+											   now, wal_retrieve_retry_interval))
+				{
+					MyLogicalRepWorker->sequencesync_failure_time = 0;
+					logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											 MyLogicalRepWorker->dbid,
+											 MySubscription->oid,
+											 MySubscription->name,
+											 MyLogicalRepWorker->userid,
+											 InvalidOid,
+											 DSM_HANDLE_INVALID);
+					break;
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -683,6 +800,12 @@ process_syncing_tables(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1443,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1570,7 +1693,7 @@ copy_table_done:
  * then it is the caller's responsibility to commit it.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(bool *started_tx, SubscriptionRelKind rel_type)
 {
 	static bool has_subrels = false;
 
@@ -1596,7 +1719,7 @@ FetchTableStates(bool *started_tx)
 		}
 
 		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		rstates = GetSubscriptionRelations(MySubscription->oid, rel_type, true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1709,7 +1832,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1840,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1735,7 +1858,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchTableStates(&started_tx, SUB_REL_KIND_TABLE);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ec96b5fe85..e60e52dff2 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4531,8 +4536,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4616,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4628,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4658,6 +4670,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  invalidate_syncing_table_states,
 								  (Datum) 0);
+
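+	/*
+	 * Record the failure time on abnormal exit; the apply worker uses it to
+	 * avoid relaunching the sequence sync worker too frequently. A clean
+	 * exit cancels this callback (see finish_sync_worker).
+	 */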
+	if (isSequenceSyncWorker(MyLogicalRepWorker))
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index be0ed1fc27..0c5601af82 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1a949966e0..e09e321ab1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11968,6 +11968,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..5584136a35 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,13 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef enum
+{
+	SUB_REL_KIND_TABLE,
+	SUB_REL_KIND_SEQUENCE,
+	SUB_REL_KIND_ALL,
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +97,7 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, SubscriptionRelKind rel_type,
+									  bool not_all_relations);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..adb1c6e32b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -242,6 +245,8 @@ extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
@@ -253,6 +258,10 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4c789279e5..bd5efd5d27 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..8f4871de1e
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,156 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(
+	allows_streaming => 'logical',
+	checkpoint_timeout => '1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+# Subscriber should have last_value as last_value + log_cnt (100 + 32) value
+# from the publisher. Refer comments in fetch_remote_sequence_data for details.
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '132|0|t', 'initial test data replicated');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '132|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '132|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '231|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '231|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '132|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 90326c6e53..a875c196a8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2772,7 +2772,9 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1

v20240724-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch, charset=US-ASCII)
From 4052fd3f55b4f812d63095eb4fe5cff2aa6cd321 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240724 1/3] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces a couple of new functions: pg_sequence_state function
allows retrieval of sequence values including LSN. The SetSequenceLastValue
function enables updating sequences with specified values.
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 169 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 211 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index fd5699f4d8..958a6c2a1d 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19576,6 +19576,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ()
+        <returnvalue>record</returnvalue>
+        ( <parameter>lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>lsn</literal> is the
+        page LSN of the sequence, <literal>last_value</literal> is the most
+        recent value returned by <function>nextval</function> in the current
+        session, <literal>log_cnt</literal> shows how many fetches remain
+        before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..bff990afa7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1287,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1894,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1909,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 73d9cf8582..1a949966e0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

#83vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#78)
Re: Logical Replication of sequences

On Tue, 23 Jul 2024 at 11:03, Peter Smith <smithpb2250@gmail.com> wrote:

Here are some review comments for patch v20240720-0002.

======
1. Commit message:

1a.
The commit message is stale. It is still referring to functions and
views that have been moved to patch 0003.

Modified

1b.
"ALL SEQUENCES" is not a command. It is a clause of the CREATE
PUBLICATION command.

Modified

src/bin/psql/describe.c

2. describeOneTableDetails

Although it's not the fault of this patch, this patch propagates the
confusion of 'result' versus 'res'. Basically, I did not understand
the need for the variable 'result'. There is already a "PGResult
*res", and unless I am mistaken we can just keep re-using that instead
of introducing a 2nd variable having almost the same name and purpose.

This is intentional; we cannot clear res because it is still used in
many places under printTable, e.g. in
printTable->print_aligned_text->pg_wcssize, which reads the cell data
that was earlier stored by the printTableAddCell calls.

The rest of the nitpick comments were merged.
The v20240724 version of the patches attached at [1] includes these changes.

[1]: /messages/by-id/CALDaNm1uncevCSMqo5Nk=tqqV_o3KNH_jwp8URiGop_nPC8BTg@mail.gmail.com

Regards,
Vignesh

#84shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#81)
Re: Logical Replication of sequences

On Wed, Jul 24, 2024 at 11:52 AM shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Jul 24, 2024 at 9:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

I had a look at patches v20240720* (considering these as the latest
one) and tried to do some basic testing (WIP). Few comments:

1)
I see 'last_value' is updated wrongly after create-sub. Steps:

-----------
pub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;
SELECT nextval('myseq0');
SELECT nextval('myseq0');
--last_value on pub is 105
select * from pg_sequences;
create publication pub1 for all tables, sequences;

Sub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;
create subscription sub1 connection 'dbname=postgres host=localhost
user=shveta port=5433' publication pub1;

--check 'r' state is reached
select pc.relname, pr.srsubstate, pr.srsublsn from pg_subscription_rel
pr, pg_class pc where (pr.srrelid = pc.oid);

--check 'last_value', it shows some random value as 136
select * from pg_sequences;

Okay, I see that in fetch_remote_sequence_data() we are inserting
'last_value + log_cnt' fetched from the remote as 'last_val' on the
subscriber, and that leads to the above behaviour. Why is this done? It
can cause a problem when we insert data into a table with an identity
column on the subscriber (whose internal sequence is replicated); the
identity column would then end up with a wrong value.
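
To spell out the arithmetic behind the 136 (a rough sketch, assuming the
default 32-value WAL prefetch, SEQ_LOG_VALS, on the publisher side):

-- publisher, after the two nextval('myseq0') calls above
SELECT last_value, log_cnt, is_called FROM pg_sequence_state('myseq0');
--  last_value | log_cnt | is_called
-- ------------+---------+-----------
--         105 |      31 | t
--
-- fetch_remote_sequence_data() then stores last_value + log_cnt as the
-- subscriber's last_value, i.e. 105 + 31 = 136, a value this
-- INCREMENT 5 sequence could never even return from nextval().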

thanks
Shveta

#85vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#81)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 24 Jul 2024 at 11:53, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Jul 24, 2024 at 9:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

I had a look at patches v20240720* (considering these as the latest
one) and tried to do some basic testing (WIP). Few comments:

1)
I see 'last_value' is updated wrongly after create-sub. Steps:

-----------
pub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;
SELECT nextval('myseq0');
SELECT nextval('myseq0');
--last_value on pub is 105
select * from pg_sequences;
create publication pub1 for all tables, sequences;

Sub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;
create subscription sub1 connection 'dbname=postgres host=localhost
user=shveta port=5433' publication pub1;

--check 'r' state is reached
select pc.relname, pr.srsubstate, pr.srsublsn from pg_subscription_rel
pr, pg_class pc where (pr.srrelid = pc.oid);

--check 'last_value', it shows some random value as 136
select * from pg_sequences;

Earlier I was setting the sequence value to the publisher's value +
log_cnt, which is why the difference appears. On further thought,
since we are not supporting incremental replication of sequences,
there is no decoding-plugin usage that would require the special
handling of last_value and log_cnt. I felt we can use the publisher's
exact last value and log count to produce the same sequence state. So
now I have changed it to fetch last_value and log_cnt from the
publisher and set the subscriber's sequence to the same values.
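
For illustration, with the earlier myseq0 example the synced state would
now look like this (a rough sketch, assuming the fetched last_value and
log_cnt are applied verbatim on the subscriber):

-- publisher, after the two nextval('myseq0') calls
SELECT last_value, log_cnt, is_called FROM pg_sequence_state('myseq0');
--  last_value | log_cnt | is_called
-- ------------+---------+-----------
--         105 |      31 | t

-- subscriber, after the sequence sync, should report the same state, so
-- the next nextval('myseq0') there continues with 110 instead of
-- jumping to 141 as with the old last_value + log_cnt approach.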

2)
I can use 'for all sequences' only with 'for all tables' and cannot
use it with the statements below. Shouldn't that be allowed?

create publication pub2 for tables in schema public, for all sequences;
create publication pub2 for table t1, for all sequences;

I feel this can be added in a later version, as part of supporting
"add/drop/set sequence" and "add/drop/set sequences in schema", once
the patch is stable.

3)
preprocess_pub_all_objtype_list():
Do we need 'alltables_specified' and 'allsequences_specified'? Can't
we make the repetition check using *alltables and *allsequences?

Modified

4) Patch 0002's commit message says: 'Additionally, a new system view,
pg_publication_sequences, has been introduced', but that view is not
part of patch 0002.

This has now been removed.

The attached v20240725 version of the patches includes these changes.

Regards,
Vignesh

Attachments:

v20240725-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch, charset=US-ASCII)
From 69e1e8b2efcd9b82d70baa06debad9d46fb9fe43 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240725 1/3] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces a couple of new functions: pg_sequence_state function
allows retrieval of sequence values including LSN. The SetSequenceLastValue
function enables updating sequences with specified values.
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 169 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 211 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index fd5699f4d8..958a6c2a1d 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19576,6 +19576,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ()
+        <returnvalue>record</returnvalue>
+        ( <parameter>lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>lsn</literal> is the
+        page LSN of the sequence, <literal>last_value</literal> is the most
+        recent value returned by <function>nextval</function> in the current
+        session, <literal>log_cnt</literal> shows how many fetches remain
+        before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..bff990afa7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1287,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1894,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1909,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 73d9cf8582..1a949966e0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20240725-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch, charset=US-ASCII)
From 18acfbf7a7d23170ca184375dd9427bfb7fd0c23 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240725 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  79 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 694 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..7dcfe37ffe 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that synchronizes all the sequences:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..585f61e414 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *alltables,
+											bool *allsequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+	;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,44 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to check whether any object type has been specified
+ * more than once and set *alltables/*allsequences accordingly.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *alltables,
+								bool *allsequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*alltables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*alltables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*allsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*allsequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b8b1888bd3..2fbaf027a9 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4212,6 +4212,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4228,23 +4229,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4256,6 +4263,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4275,6 +4283,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4322,8 +4332,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..a92af54905 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples = 0;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 891face1b6..be0ed1fc27 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b4d7f9217c..90326c6e53 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2247,6 +2247,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20240725-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 130f4be8fc6f2523f57eaab63216a239db68443c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 24 Jul 2024 11:24:57 +0530
Subject: [PATCH v20240725 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization via the sequence sync worker,
     following the steps listed for subscription creation.
   - Synchronization occurs only for the newly added sequences.

3) Introduce a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences via the sequence
     sync worker, following the steps listed for subscription creation.
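
Example (a minimal sketch of the resulting workflow; the publication,
subscription, and connection names below are only illustrative):

    -- publisher
    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

    -- subscriber: published sequences are added to pg_subscription_rel in
    -- 'init' state and synchronized by the sequence sync worker
    CREATE SUBSCRIPTION sub_all_seq
        CONNECTION 'host=publisher dbname=postgres' PUBLICATION pub_all_seq;

    -- pick up and synchronize sequences newly added on the publisher
    ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION;

    -- re-synchronize sequence data for all subscribed sequences
    ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION SEQUENCES;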
---
 doc/src/sgml/catalogs.sgml                    |  13 +-
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  39 +-
 doc/src/sgml/system-views.sgml                |  67 ++++
 src/backend/catalog/pg_publication.c          |  46 +++
 src/backend/catalog/pg_subscription.c         |  35 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |   4 +-
 src/backend/commands/subscriptioncmds.c       | 204 ++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    | 100 ++++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 356 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 163 +++++++-
 src/backend/replication/logical/worker.c      |  23 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription.h         |   6 +
 src/include/catalog/pg_subscription_rel.h     |  10 +-
 src/include/commands/sequence.h               |   2 +-
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  23 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/t/034_sequences.pl      | 154 ++++++++
 src/tools/pgindent/typedefs.list              |   2 +
 31 files changed, 1218 insertions(+), 95 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..19d04b107e 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8103,14 +8103,15 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
 
   <para>
    The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   state for each replicated table and sequence in each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
+   This catalog only contains tables and sequences known to the subscription
+   after running either
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+   or <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
    PUBLICATION</command></link>.
   </para>
 
@@ -8145,7 +8146,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 3dec0b7cfe..2bb4660336 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index ccdd24312b..1d3bc6a285 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1984,8 +1984,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 6af6d0d2c8..d3029b9e91 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -153,30 +154,45 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -195,6 +211,19 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index bdc34cf94e..b893fc2d90 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2147,6 +2152,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   the sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..5610c0749c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -493,12 +494,19 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * If rel_type is SUB_REL_KIND_SEQUENCE, get only the sequences. If rel_type is
+ * SUB_REL_KIND_TABLE, get only the tables. If rel_type is SUB_REL_KIND_ALL,
+ * get both tables and sequences.
+ * If not_all_relations is true for SUB_REL_KIND_TABLE and SUB_REL_KIND_ALL,
+ * return only the relations that are not in a ready state, otherwise return all
+ * the relations of the subscription. If not_all_relations is true for
+ * SUB_REL_KIND_SEQUENCE, return only the sequences that are in init state,
+ * otherwise return all the sequences of the subscription.
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, SubscriptionRelKind rel_type,
+						 bool not_all_relations)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -514,11 +522,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	/* Get the relations that are not in ready state */
+	if (rel_type != SUB_REL_KIND_SEQUENCE && not_all_relations)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
 					CharGetDatum(SUBREL_STATE_READY));
+	/* Get the sequences that are in init state */
+	else if (rel_type == SUB_REL_KIND_SEQUENCE && not_all_relations)
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(SUBREL_STATE_INIT));
 
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, nkeys, skey);
@@ -529,8 +544,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		subreltype;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		subreltype = get_rel_relkind(subrel->srrelid);
+
+		/* If only tables were requested, skip the sequences */
+		if (rel_type == SUB_REL_KIND_TABLE && subreltype == RELKIND_SEQUENCE)
+			continue;
+
+		/* If only sequences were requested, skip the tables */
+		if (rel_type == SUB_REL_KIND_SEQUENCE && subreltype != RELKIND_SEQUENCE)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index bff990afa7..a3e7c791b2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -342,7 +342,7 @@ ResetSequence(Oid seq_relid)
  * logical replication.
  */
 void
-SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 log_cnt)
 {
 	SeqTable        elm;
 	Relation        seqrel;
@@ -370,7 +370,7 @@ SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
 
 	seq->last_value = new_last_value;
 	seq->is_called = true;
-	seq->log_cnt = 0;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..3318598f08 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -103,6 +103,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -751,6 +752,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *sequences;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -781,6 +784,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 										InvalidXLogRecPtr, true);
 			}
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										 rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
@@ -847,12 +866,25 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If the copy_data parameter is true, the function will set the state
+ * to "init"; otherwise, it will set the state to "ready". When
+ * validate_publications is provided with a publication list, the function
+ * checks that the specified publications exist on the publisher. If
+ * refresh_all_sequences is true, it will mark all sequences with "init" state
+ * for re-synchronization; otherwise, only the newly added relations and
+ * sequences will be updated based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications,
+						  bool refresh_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -885,14 +917,21 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 	PG_TRY();
 	{
+		SubscriptionRelKind reltype = refresh_all_sequences ?
+			SUB_REL_KIND_SEQUENCE : SUB_REL_KIND_ALL;
+
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (reltype == SUB_REL_KIND_ALL)
+			/* Get the table list from publisher. */
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names, fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, reltype, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +950,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (!refresh_all_sequences)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -973,6 +1013,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,30 +1035,37 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
+				RemoveSubscriptionRel(sub->oid, relid);
+
 				sub_remove_rels[remove_rel_len].relid = relid;
 				sub_remove_rels[remove_rel_len++].state = state;
 
-				RemoveSubscriptionRel(sub->oid, relid);
-
-				logicalrep_worker_stop(sub->oid, relid);
+				/*
+				 * Since a single sequence sync worker synchronizes all the
+				 * sequences, stop the worker only if the relation is not a
+				 * sequence.
+				 */
+				if (relkind != RELKIND_SEQUENCE)
+					logicalrep_worker_stop(sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
 				 * tablesync origin.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)
 				{
 					char		originname[NAMEDATALEN];
 
 					/*
 					 * Drop the tablesync's origin tracking if exists.
 					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * It is possible that the origin is not yet created
+					 * for tablesync worker, this can happen for the
+					 * states before SUBREL_STATE_FINISHEDCOPY. The
+					 * tablesync worker or apply worker can also
+					 * concurrently try to drop the origin and by this
+					 * time the origin might be already removed. For these
+					 * reasons, passing missing_ok = true.
 					 */
 					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
 													   sizeof(originname));
@@ -1025,10 +1073,25 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+			/*
+			 * In case of REFRESH PUBLICATION SEQUENCES, the existing sequences
+			 * should be re-synchronized.
+			 */
+			else if (refresh_all_sequences)
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
 			}
 		}
 
@@ -1039,6 +1102,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1490,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1505,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, false);
 				}
 
 				break;
@@ -1479,8 +1545,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1564,27 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1619,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, false);
 
 				break;
 			}
@@ -1804,7 +1884,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, SUB_REL_KIND_TABLE, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2242,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2420,62 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "      FROM pg_catalog.pg_publication_sequences s\n"
+						   "      WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 585f61e414..3f66ddd616 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..76b934234b 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -237,7 +237,8 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given
  * subscription id and relid.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * We are only interested in the leader apply worker, table sync worker and
+ * sequence sync worker.
  */
 LogicalRepWorker *
 logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
@@ -267,6 +268,38 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip non sequence sync workers. */
+		if (!isSequenceSyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (only_running && w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -297,6 +330,26 @@ logicalrep_workers_find(Oid subid, bool only_running, bool acquire_lock)
 	return res;
 }
 
+/*
+ * Return the pid of the apply worker for one that matches given
+ * subscription id.
+ */
+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)
+{
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		if (isApplyWorker(w) && w->subid == subid && only_running && w->proc)
+			return w;
+	}
+
+	return NULL;
+}
+
 /*
  * Start new logical replication background worker, if possible.
  *
@@ -317,6 +370,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -324,11 +378,15 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - must be valid worker type
 	 * - tablesync workers are only ones to have relid
 	 * - parallel apply worker is the only kind of subworker
+	 * - sequencesync workers will not have relid
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
 	Assert(is_tablesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
 
+	if (is_sequencesync_worker)
+		Assert(!OidIsValid(relid));
+
 	ereport(DEBUG1,
 			(errmsg_internal("starting logical replication worker for subscription \"%s\"",
 							 subname)));
@@ -402,7 +460,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +548,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +556,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -815,6 +882,26 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time for the sequence sync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequence sync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+	worker = logicalrep_apply_worker_find(MyLogicalRepWorker->subid, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +950,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (isTableSyncWorker(w) && w->subid == subid)
 			res++;
 	}
 
@@ -1314,7 +1401,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1444,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..d8b007f9e9
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,356 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Fetch sequence data (current state) from the remote node, including
+ * the latest sequence value from the publisher and the Page LSN for the
+ * sequence.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid,
+						   int64 *log_cnt, XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[3] = {INT8OID, INT8OID, LSNOID};
+	int64		value = (Datum) 0;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 3, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		bool		isnull;
+
+		value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		*lsn = DatumGetInt64(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the retrieved value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	int64		log_cnt;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	sequence_value = fetch_remote_sequence_data(conn, remoteid, &log_cnt, &lsn);
+
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value, log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, an excessively
+ * high batch size would keep a transaction open for an extended period,
+ * which we also want to avoid.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	/* Get the sequences that should be synchronized. */
+	StartTransactionCommand();
+	sequences = GetSubscriptionRelations(subid, SUB_REL_KIND_SEQUENCE, true);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		/*
+		 * Verify whether the current batch of sequences is synchronized or if
+		 * there are no remaining sequences to synchronize.
+		 */
+		if ((((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			(curr_seq + 1) == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = curr_seq - (curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i <= curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+		curr_seq++;
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequence sync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..313e5eb357 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -132,16 +132,16 @@ typedef enum
 
 static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static bool FetchTableStates(bool *started_tx, SubscriptionRelKind rel_type);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
@@ -157,15 +157,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* No need to set the sequence failure time when it is a clean exit */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -387,7 +396,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -430,7 +439,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchTableStates(&started_tx, SUB_REL_KIND_ALL);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -463,6 +472,17 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	foreach(lc, table_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+		char		relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind == RELKIND_SEQUENCE)
+			continue;
 
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
@@ -477,11 +497,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +675,108 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If there is a sequence synchronization worker running already, no need to
+ * start a new one; the existing sequence sync worker will synchronize all the
+ * sequences. If there are still any sequences to be synced after the sequence
+ * sync worker exited, then a new sequence sync worker can be started in the
+ * next iteration. To prevent starting the sequence sync worker at a high
+ * frequency after a failure, we store its last failure time. We start the sync
+ * worker for the same relation after waiting at least
+ * wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply()
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription sequences here. */
+	FetchTableStates(&started_tx, SUB_REL_KIND_ALL);
+
+	/*
+	 * Start sequence sync worker if there is not one already.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, table_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		char		relkind;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		relkind = get_rel_relkind(rstate->relid);
+		if (relkind != RELKIND_SEQUENCE || rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequence worker already running?
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+														  true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int			nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence
+			 * sync worker, and break from the loop.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				TimestampTz now = GetCurrentTimestamp();
+
+				if (!MyLogicalRepWorker->sequencesync_failure_time ||
+					TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+											   now, wal_retrieve_retry_interval))
+				{
+					MyLogicalRepWorker->sequencesync_failure_time = 0;
+					logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											 MyLogicalRepWorker->dbid,
+											 MySubscription->oid,
+											 MySubscription->name,
+											 MyLogicalRepWorker->userid,
+											 InvalidOid,
+											 DSM_HANDLE_INVALID);
+					break;
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -683,6 +800,12 @@ process_syncing_tables(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1443,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1570,7 +1693,7 @@ copy_table_done:
  * then it is the caller's responsibility to commit it.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(bool *started_tx, SubscriptionRelKind rel_type)
 {
 	static bool has_subrels = false;
 
@@ -1596,7 +1719,7 @@ FetchTableStates(bool *started_tx)
 		}
 
 		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		rstates = GetSubscriptionRelations(MySubscription->oid, rel_type, true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1709,7 +1832,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1840,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1735,7 +1858,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchTableStates(&started_tx, SUB_REL_KIND_TABLE);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ec96b5fe85..e60e52dff2 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4531,8 +4536,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4616,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4628,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4658,6 +4670,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  invalidate_syncing_table_states,
 								  (Datum) 0);
+
+	if (isSequenceSyncWorker(MyLogicalRepWorker))
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index be0ed1fc27..0c5601af82 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1a949966e0..e09e321ab1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11968,6 +11968,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 0aa14ec4a2..8c96f0ce72 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -159,6 +159,12 @@ typedef struct Subscription
 								 * specified origin */
 } Subscription;
 
+typedef struct SubscriptionSeqInfo
+{
+	Oid			seqid;
+	XLogRecPtr	lsn;
+} SubscriptionSeqInfo;
+
 /* Disallow streaming in-progress transactions. */
 #define LOGICALREP_STREAM_OFF 'f'
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..5584136a35 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,13 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef enum
+{
+	SUB_REL_KIND_TABLE,
+	SUB_REL_KIND_SEQUENCE,
+	SUB_REL_KIND_ALL,
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +97,7 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, SubscriptionRelKind reltype,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 003f2e3413..a302890156 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,7 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
-extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..adb1c6e32b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -242,6 +245,8 @@ extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
@@ -253,6 +258,10 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4c789279e5..bd5efd5d27 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..adf7b0cfc1
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,154 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(
+	allows_streaming => 'logical',
+	checkpoint_timeout => '1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 90326c6e53..a875c196a8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2772,7 +2772,9 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
+SubscriptionSeqInfo
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
 SupportRequestIndexCondition
-- 
2.34.1

#86shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#85)
Re: Logical Replication of sequences

On Thu, Jul 25, 2024 at 9:06 AM vignesh C <vignesh21@gmail.com> wrote:

The attached v20240725 version patch has the changes for the same.

Thank you for addressing the comments. Please review the issues below:

1) Sub ahead of pub due to wrong initial sync of last_value for
non-incremented sequences. Steps at [1].
2) Sequence's min value is not honored on sub during replication. Steps at [2].

[1]:
-----------
on PUB:
CREATE SEQUENCE myseq001 INCREMENT 5 START 100;
SELECT * from pg_sequences; -->shows last_val as NULL

on SUB:
CREATE SEQUENCE myseq001 INCREMENT 5 START 100;
SELECT * from pg_sequences; -->correctly shows last_val as NULL
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT * from pg_sequences; -->wrongly updates last_val to 100; it is
still NULL on Pub.

Thus , SELECT nextval('myseq001') on pub gives 100, while on sub gives 105.
-----------

[2]:
-----------
Pub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 10;
SELECT * from pg_sequences;

Sub:
CREATE SEQUENCE myseq0 INCREMENT 5 MINVALUE 100;

Pub:
SELECT nextval('myseq0');
SELECT nextval('myseq0');

Sub:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
--check 'last_value', it is 15 while min_value is 100
SELECT * from pg_sequences;
-----------

thanks
Shveta

#87Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#75)
1 attachment(s)
Re: Logical Replication of sequences

Hi, here are more review comments for patch v20240720-0003.

======
src/backend/catalog/pg_subscription.c

(Numbers are starting at #4 because this is a continuation of the docs review)

4. GetSubscriptionRelations

nitpick - rearranged the function header comment

~

5.
TBH, I'm thinking that just passing 2 parameters:
- bool get_tables
- bool get_sequences
where one or both can be true, would have resulted in simpler code,
instead of introducing this new enum SubscriptionRelKind.
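
For example, a signature along these lines (only a sketch; the parameter
names are illustrative, not taken from the patch):

extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
                                      bool get_sequences, bool not_ready);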

~

6.
The 'not_all_relations' parameter/logic feels really awkward. IMO it
needs a better name and reverse the meaning to remove all the "nots".

For example, commenting it and calling it like below could be much simpler.

'all_relations'
If returning sequences, if all_relations=true get all sequences,
otherwise only get sequences that are in 'init' state.
If returning tables, if all_relations=true get all tables, otherwise
only get tables that have not reached 'READY' state.

======
src/backend/commands/subscriptioncmds.c

AlterSubscription_refresh:

nitpick - this function comment is difficult to understand. I've
rearranged it a bit but it could still do with some further
improvement.
nitpick - move some code comments
nitpick - I adjusted the "stop worker" comment slightly. Please check
it is still correct.
nitpick - add a blank line

~

7.
The logic seems over-complicated. For example, why is the sequence
list *always* fetched, but the tables list is only sometimes fetched?
Furthermore, this 'refresh_all_sequences' parameter seems to have a
strange interference with tables (e.g. even though it is possible to
refresh all tables and sequences at the same time). It is as if the
meaning is 'refresh_publication_sequences' yet it is not called that
(???)

These gripes may be related to my other thread [1] about the new ALTER
syntax. (I feel that there should be the ability to refresh ALL TABLES
or ALL SEQUENCES independently if the user wants to). IIUC, it would
simplify this function logic as well as being more flexible. Anyway, I
will leave the discussion about syntax to that other thread.

~

8.
+ if (relkind != RELKIND_SEQUENCE)
+ logicalrep_worker_stop(sub->oid, relid);
  /*
  * For READY state, we would have already dropped the
  * tablesync origin.
  */
- if (state != SUBREL_STATE_READY)
+ if (state != SUBREL_STATE_READY && relkind != RELKIND_SEQUENCE)

It might be better to have a single "if (relkind != RELKIND_SEQUENCE)"
here and combine both of these pieces of code under it.
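
That is, something like this (structural sketch only; the existing origin
cleanup code itself is unchanged):

    if (relkind != RELKIND_SEQUENCE)
    {
        logicalrep_worker_stop(sub->oid, relid);

        /*
         * For READY state, we would have already dropped the
         * tablesync origin.
         */
        if (state != SUBREL_STATE_READY)
        {
            /* ... existing origin cleanup code, unchanged ... */
        }
    }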

~

9.
  ereport(DEBUG1,
- (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ (errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name,
+ get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table")));

IIUC prior conditions mean get_rel_relkind(relid) == RELKIND_SEQUENCE
will be impossible here.

~~~

10. AlterSubscription

+ PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ...
REFRESH PUBLICATION SEQUENCES");

IIUC the docs page for ALTER SUBSCRIPTION was missing this information
about "REFRESH PUBLICATION SEQUENCES" in transactions. Docs need more
updates.

======
src/backend/replication/logical/launcher.c

logicalrep_worker_find:
nitpick - tweak comment to say "or" instead of "and"

~~~

11.
+/*
+ * Return the pid of the apply worker for one that matches given
+ * subscription id.
+ */
+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)

The function comment is wrong. This is not returning a PID.

~~~

12.
+ if (is_sequencesync_worker)
+ Assert(!OidIsValid(relid));

Should we the Assert to something more like:
Assert(!is_sequencesync_worker || !OidIsValid(relid));

Otherwise, in non-assert builds the current code will compile into an
empty conditional statement, which is a bit odd.

~~~

logicalrep_seqsyncworker_failuretime:
nitpick - tweak function comment
nitpick - add blank line

======
.../replication/logical/sequencesync.c

13. fetch_remote_sequence_data

The "current state" mentioned in the function comment is a bit vague.
Can't tell from this comment what it is returning without looking
deeper into the function code.

~

nitpick - typo "scenarios" in comment

~~~

copy_sequence:
nitpick - typo "withe" in function comment
nitpick - typo /retreived/retrieved/
nitpick - add/remove blank lines

~~~

LogicalRepSyncSequences:
nitpick - move a comment.
nitpick - remove blank line

14.
+ /*
+ * Verify whether the current batch of sequences is synchronized or if
+ * there are no remaining sequences to synchronize.
+ */
+ if ((((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+ (curr_seq + 1) == seq_count)

All this "curr_seq + 1" maths seems unnecessarily tricky. Can't we
just increment curr_seq before this calculation?
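
For example (rough sketch only, not tested; the batch-start index
arithmetic further down would need a matching adjustment):

    curr_seq++;

    if ((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0 ||
        curr_seq == seq_count)
    {
        /* ... commit the batch and LOG the synced sequences, as now ... */
    }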

~

nitpick - simplify the comment about batching
nitpick - added a comment to the commit

======
src/backend/replication/logical/tablesync.c

finish_sync_worker:
nitpick - added an Assert so the if/else is less risky.
nitpick - modify the comment about failure time when it is a clean exit

~~~

15. process_syncing_sequences_for_apply

+ /* We need up-to-date sync state info for subscription sequences here. */
+ FetchTableStates(&started_tx, SUB_REL_KIND_ALL);

Should that say SUB_REL_KIND_SEQUENCE?

~

16.
+ /*
+ * If there are free sync worker slot(s), start a new sequence
+ * sync worker, and break from the loop.
+ */
+ if (nsyncworkers < max_sync_workers_per_subscription)

Should this "if" have some "else" code to log a warning if we have run
out of free workers? Otherwise, how will the user know that the system
may need tuning?
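
e.g. perhaps something roughly like this (sketch only; the message wording
and log level are just placeholders):

    else
        ereport(WARNING,
                errmsg("out of sync worker slots for subscription \"%s\"",
                       MySubscription->name),
                errhint("You might need to increase \"max_sync_workers_per_subscription\"."));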

~~~

17. FetchTableStates

  /* Fetch all non-ready tables. */
- rstates = GetSubscriptionRelations(MySubscription->oid, true);
+ rstates = GetSubscriptionRelations(MySubscription->oid, rel_type, true);

This feels risky. IMO there needs to be some prior Assert about the
rel_type. For example, if it happened to be SUB_REL_KIND_SEQUENCE then
this function code doesn't seem to make sense.
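
e.g. something like (sketch):

    Assert(rel_type == SUB_REL_KIND_TABLE || rel_type == SUB_REL_KIND_ALL);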

~~~

======
src/backend/replication/logical/worker.c

18. SetupApplyOrSyncWorker

+
+ if (isSequenceSyncWorker(MyLogicalRepWorker))
+ before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);

Probably that should be using macro am_sequencesync_worker(), right?

======
src/include/catalog/pg_subscription_rel.h

19.
+typedef enum
+{
+ SUB_REL_KIND_TABLE,
+ SUB_REL_KIND_SEQUENCE,
+ SUB_REL_KIND_ALL,
+} SubscriptionRelKind;
+

I was not sure how helpful this is; it might not be needed. e.g. see
review comment for GetSubscriptionRelations

~~~

20.
+extern List *GetSubscriptionRelations(Oid subid, SubscriptionRelKind reltype,
+   bool not_ready);

There is a mismatch with the ‘not_ready’ parameter name here and in
the function implementation

======
src/test/subscription/t/034_sequences.pl

nitpick - removed a blank line

======
99.
Please also see the attached diffs patch which implements all the
nitpicks mentioned above.

======
[1]: syntax - /messages/by-id/CAHut+PuFH1OCj-P1UKoRQE2X4-0zMG+N1V7jdn=tOQV4RNbAbw@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240725_SEQ_0003.txt (text/plain)
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 5610c07..04d322a 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -494,14 +494,18 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If rel_type is SUB_REL_KIND_SEQUENCE, get only the sequences. If rel_type is
- * SUB_REL_KIND_TABLE, get only the tables. If rel_type is SUB_REL_KIND_ALL,
- * get both tables and sequences.
+ * rel_type:
+ * If SUB_REL_KIND_SEQUENCE, return only the sequences.
+ * If SUB_REL_KIND_TABLE, return only the tables.
+ * If SUB_REL_KIND_ALL, return both tables and sequences.
+ *
+ * not_all_relations:
  * If not_all_relations is true for SUB_REL_KIND_TABLE and SUB_REL_KIND_ALL,
  * return only the relations that are not in a ready state, otherwise return all
  * the relations of the subscription. If not_all_relations is true for
  * SUB_REL_KIND_SEQUENCE, return only the sequences that are in init state,
  * otherwise return all the sequences of the subscription.
+ *
  * The returned list is palloc'ed in the current memory context.
  */
 List *
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d23901a..2f9ff8b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -879,11 +879,13 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
  * Update the subscription to refresh both the publication and the publication
  * objects associated with the subscription.
  *
- * If the copy_data parameter is true, the function will set the state
- * to "init"; otherwise, it will set the state to "ready". When the
- * validate_publications is provided with a publication list, the function
- * checks that the  specified publications exist on the publisher. If
- * refresh_all_sequences is  true, it will mark all sequences with "init" state
+ * If 'copy_data' parameter is true, the function will set the state
+ * to "init"; otherwise, it will set the state to "ready".
+ *
+ * When 'validate_publications' is provided with a publication list, the function
+ * checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_all_sequences' is true, it will mark all sequences with "init" state
  * for re-synchronization; otherwise, only the newly added relations and
  * sequences will be updated based on the copy_data parameter.
  */
@@ -932,8 +934,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
+		/* Get the table list from publisher. */
 		if (reltype == SUB_REL_KIND_ALL)
-			/* Get the table list from publisher. */
 			pubrel_names = fetch_table_list(wrconn, sub->publications);
 
 		/* Get the sequence list from publisher. */
@@ -1050,9 +1052,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * Since one sequence sync workers synchronizes all the
-				 * sequences, stop the worker only if relation kind is not
-				 * sequence.
+				 * A single sequence-sync worker synchronizes all sequences,
+				 * so only stop workers when relation kind is not sequence.
 				 */
 				if (relkind != RELKIND_SEQUENCE)
 					logicalrep_worker_stop(sub->oid, relid);
@@ -1088,6 +1089,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										 sub->name,
 										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table")));
 			}
+
 			/*
 			 * In case of REFRESH PUBLICATION SEQUENCES, the existing sequences
 			 * should be re-synchronized.
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 17759c2..86be218 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -237,7 +237,7 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given
  * subscription id and relid.
  *
- * We are only interested in the leader apply worker, table sync worker and
+ * We are only interested in the leader apply worker, table sync worker, or
  * sequence sync worker.
  */
 LogicalRepWorker *
@@ -877,7 +877,7 @@ logicalrep_launcher_onexit(int code, Datum arg)
 }
 
 /*
- * Update the failure time for the sequence sync worker in the subscription's
+ * Update the failure time of the sequence sync worker in the subscription's
  * apply worker.
  *
  * This function is invoked when the sequence sync worker exits due to a
@@ -889,6 +889,7 @@ logicalrep_seqsyncworker_failuretime(int code, Datum arg)
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
 	worker = logicalrep_apply_worker_find(MyLogicalRepWorker->subid, true);
 	if (worker)
 		worker->sequencesync_failure_time = GetCurrentTimestamp();
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 7b1d071..45782c6 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -49,7 +49,7 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
 
 	/*
 	 * In the event of crash we can lose (skip over) as many values as we
-	 * pre-logged. We might get duplicate values in this kind of scenarios. So
+	 * pre-logged. We might get duplicate values in this kind of scenario. So
 	 * use (last_value + log_cnt) to avoid it.
 	 */
 	appendStringInfo(&cmd, "SELECT (last_value + log_cnt), page_lsn "
@@ -87,7 +87,7 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, XLogRecPtr *lsn)
  * Copy existing data of a sequence from publisher.
  *
  * Fetch the sequence value from the publisher and set the subscriber sequence
- * withe the retreived value. Caller is responsible for locking the local
+ * withe the retrieved value. Caller is responsible for locking the local
  * relation.
  */
 static XLogRecPtr
@@ -115,9 +115,9 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
 					 "   AND c.relname = %s",
 					 quote_literal_cstr(nspname),
 					 quote_literal_cstr(relname));
+
 	res = walrcv_exec(conn, cmd.data,
 					  lengthof(tableRow), tableRow);
-
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_FAILURE),
@@ -176,8 +176,9 @@ LogicalRepSyncSequences(void)
  */
 #define MAX_SEQUENCES_SYNC_PER_BATCH 100
 
-	/* Get the sequences that should be synchronized. */
 	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
 	sequences = GetSubscriptionRelations(subid, SUB_REL_KIND_SEQUENCE, true);
 
 	/* Allocate the tracking info in a permanent memory context. */
@@ -191,7 +192,6 @@ LogicalRepSyncSequences(void)
 	}
 	MemoryContextSwitchTo(oldctx);
 
-
 	CommitTransactionCommand();
 
 	/* Is the use of a password mandatory? */
@@ -272,8 +272,8 @@ LogicalRepSyncSequences(void)
 		table_close(sequence_rel, NoLock);
 
 		/*
-		 * Verify whether the current batch of sequences is synchronized or if
-		 * there are no remaining sequences to synchronize.
+		 * Have we reached the end of the current batch of sequences,
+		 * or last remaining sequences to synchronize?
 		 */
 		if ((((curr_seq + 1) % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
 			(curr_seq + 1) == seq_count)
@@ -292,6 +292,7 @@ LogicalRepSyncSequences(void)
 							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
 			}
 
+			/* Commit this batch, and prepare for next batch. */
 			CommitTransactionCommand();
 			start_txn = true;
 		}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 313e5eb..9f77a78 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -143,6 +143,8 @@ void
 pg_attribute_noreturn()
 finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -171,7 +173,7 @@ finish_sync_worker(LogicalRepWorkerType wtype)
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
-	/* No need to set the sequence failure time when it is a clean exit */
+	/* This is a clean exit, so no need to set a sequence failure time. */
 	if (wtype == WORKERTYPE_SEQUENCESYNC)
 		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
 
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
index 8f4871d..ecc17f5 100644
--- a/src/test/subscription/t/034_sequences.pl
+++ b/src/test/subscription/t/034_sequences.pl
@@ -127,7 +127,6 @@ $result = $node_subscriber->safe_psql(
 	'postgres', qq(
 	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
 ));
-
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
#88shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#86)
Re: Logical Replication of sequences

On Thu, Jul 25, 2024 at 12:08 PM shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Jul 25, 2024 at 9:06 AM vignesh C <vignesh21@gmail.com> wrote:

The attached v20240725 version patch has the changes for the same.

Thank You for addressing the comments. Please review below issues:

1) Sub ahead of pub due to wrong initial sync of last_value for
non-incremented sequences. Steps at [1]
2) Sequence's min value is not honored on sub during replication. Steps at [2]

One more issue:
3) Sequence datatype's range is not honored on sub during
replication, while it is honored for tables.

Behaviour for tables:
---------------------
Pub: create table tab1( i integer);
Sub: create table tab1( i smallint);

Pub: insert into tab1 values(generate_series(1, 32768));

Error on sub:
2024-07-25 10:38:06.446 IST [178680] ERROR: value "32768" is out of
range for type smallint

---------------------
Behaviour for sequences:
---------------------

Pub:
CREATE SEQUENCE myseq_i as integer INCREMENT 10000 START 1;

Sub:
CREATE SEQUENCE myseq_i as smallint INCREMENT 10000 START 1;

Pub:
SELECT nextval('myseq_i');
SELECT nextval('myseq_i');
SELECT nextval('myseq_i');
SELECT nextval('myseq_i');
SELECT nextval('myseq_i'); -->brings value to 40001

Sub:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT * from pg_sequences; --> last_value has reached 40001, while the
smallint range only goes up to 32767.

thanks
Shveta

#89Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#85)
1 attachment(s)
Re: Logical Replication of sequences

Here are some review comments for latest patch v20240725-0002

======
doc/src/sgml/ref/create_publication.sgml

nitpick - tweak to the description of the example.

======
src/backend/parser/gram.y

preprocess_pub_all_objtype_list:
nitpick - typo "allbjects_list"
nitpick - reword function header
nitpick - /alltables/all_tables/
nitpick - /allsequences/all_sequences/
nitpick - I think code is safe as-is because makeNode internally does
palloc0, but OTOH adding Assert would be nicer just to remove any
doubts.

======
src/bin/psql/describe.c

1.
+ /* Print any publications */
+ if (pset.sversion >= 180000)
+ {
+ int tuples = 0;

No need to assign value 0 here, because this will be unconditionally
assigned before use anyway.

~~~~

2. describePublications

has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 18000);

That's a bug! Should be 180000, not 18000.

======

And, please see the attached diffs patch, which implements the
nitpicks mentioned above.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240726_SEQ_0002.txt (text/plain)
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 7dcfe37..783874f 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -420,7 +420,7 @@ CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
 
   <para>
-   Create a publication that synchronizes all the sequences:
+   Create a publication that publishes all sequences for synchronization:
 <programlisting>
 CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
 </programlisting>
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 585f61e..9b3cad1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,9 +215,9 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
-static void preprocess_pub_all_objtype_list(List *allbjects_list,
-											bool *alltables,
-											bool *allsequences,
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
 											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
@@ -19440,39 +19440,42 @@ parsePartitionStrategy(char *strategy)
 }
 
 /*
- * Process all_objects_list to check if the options have been specified more than
- * once and set alltables/allsequences.
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
  */
 static void
-preprocess_pub_all_objtype_list(List *all_objects_list, bool *alltables,
-								bool *allsequences, core_yyscan_t yyscanner)
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
 {
 	if (!all_objects_list)
 		return;
 
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
 	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
 	{
 		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
 		{
-			if (*alltables)
+			if (*all_tables)
 				ereport(ERROR,
 						errcode(ERRCODE_SYNTAX_ERROR),
 						errmsg("invalid publication object list"),
 						errdetail("TABLES can be specified only once."),
 						parser_errposition(obj->location));
 
-			*alltables = true;
+			*all_tables = true;
 		}
 		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
 		{
-			if (*allsequences)
+			if (*all_sequences)
 				ereport(ERROR,
 						errcode(ERRCODE_SYNTAX_ERROR),
 						errmsg("invalid publication object list"),
 						errdetail("SEQUENCES can be specified only once."),
 						parser_errposition(obj->location));
 
-			*allsequences = true;
+			*all_sequences = true;
 		}
 	}
 }
#90Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#85)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh,

There are still pending changes from my previous review of the
0720-0003 patch [1], but here are some new review comments for your
latest patch v20240525-0003.

======
doc/src/sgml/catalogs.sgml

nitpick - fix plurals and tweak the description.

~~~

1.
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE
SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION
... REFRESH
+   This catalog only contains tables and sequences known to the subscription
+   after running either
+   <link linkend="sql-createsubscription"><command>CREATE
SUBSCRIPTION</command></link>
+   or <link linkend="sql-altersubscription"><command>ALTER
SUBSCRIPTION ... REFRESH
    PUBLICATION</command></link>.
   </para>

Shouldn't this mention "REFRESH PUBLICATION SEQUENCES" too?

======
src/backend/commands/sequence.c

SetSequenceLastValue:
nitpick - maybe change: /log_cnt/new_log_cnt/ for consistency with the
other parameter, and to emphasise the old log_cnt is overwritten

======
src/backend/replication/logical/sequencesync.c

2.
+/*
+ * fetch_remote_sequence_data
+ *
+ * Fetch sequence data (current state) from the remote node, including
+ * the latest sequence value from the publisher and the Page LSN for the
+ * sequence.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid,
+    int64 *log_cnt, XLogRecPtr *lsn)

2a.
Now you are also returning the 'log_cnt' but that is not mentioned by
the function comment.

~

2b.
Is it better to name these returned by-ref ptrs like 'ret_log_cnt',
and 'ret_lsn' to emphasise they are output variables? YMMV.

~~~

3.
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))

This will have one-and-only-one tuple for the discovered sequence,
won't it? So, why is this a while loop?
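
If only one tuple can ever come back, maybe something like this would
be clearer than a loop (untested sketch; the error wording is invented):

slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
    elog(ERROR, "sequence data was not returned from the publisher");
/* ... read the sequence values from this single tuple ... */
ExecDropSingleTupleTableSlot(slot);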

======
src/include/commands/sequence.h

nitpick - maybe change: /log_cnt/new_log_cnt/ (same as earlier in this post)

======
src/test/subscription/t/034_sequences.pl

4.
Q. Should we be suspicious that log_cnt changes from '32' to '31', or
is there a valid explanation? It smells like some calculation is
off-by-one, but without debugging I can't tell if it is right or
wrong.

======
Please also see the attached diffs patch, which implements the
nitpicks mentioned above.

======
[1]: 0720-0003 review - /messages/by-id/CAHut+PsfsfzyBrmo8E43qFMp9_bmen2tuCsNYN8sX=fa86SdfA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240726_SEQ_0003.txt (text/plain)
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 19d04b1..dcd0b98 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,8 +8102,8 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated tables and sequences in each subscription.  This
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
    is a many-to-many mapping.
   </para>
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a3e7c79..f292fbc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -342,7 +342,7 @@ ResetSequence(Oid seq_relid)
  * logical replication.
  */
 void
-SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 log_cnt)
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 new_log_cnt)
 {
 	SeqTable        elm;
 	Relation        seqrel;
@@ -370,7 +370,7 @@ SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 log_cnt)
 
 	seq->last_value = new_last_value;
 	seq->is_called = true;
-	seq->log_cnt = log_cnt;
+	seq->log_cnt = new_log_cnt;
 
 	MarkBufferDirty(buf);
 
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index a302890..4c6aee0 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,7 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
-extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 log_cnt);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 new_log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
#91vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#87)
3 attachment(s)
Re: Logical Replication of sequences

On Thu, 25 Jul 2024 at 12:54, Peter Smith <smithpb2250@gmail.com> wrote:

Hi, here are more review comments for patch v20240720-0003.
7.
The logic seems over-complicated. For example, why is the sequence
list *always* fetched, but the tables list is only sometimes fetched?
Furthermore, this 'refresh_all_sequences' parameter seems to have a
strange interference with tables (e.g. even though it is possible to
refresh all tables and sequences at the same time). It is as if the
meaning is 'refresh_publication_sequences' yet it is not called that
(???)

These gripes may be related to my other thread [1] about the new ALTER
syntax. (I feel that there should be the ability to refresh ALL TABLES
or ALL SEQUENCES independently if the user wants to). IIUC, it would
simplify this function logic as well as being more flexible. Anyway, I
will leave the discussion about syntax to that other thread.

1) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
This command will refresh both tables and sequences. It will remove
stale tables and sequences and include newly added tables and
sequences.
2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command will refresh only sequences. It will remove stale
sequences and synchronize all sequences, including the existing ones.
So the table list is fetched only for the first command.
I have changed the refresh_publication_sequences parameter to tables,
sequences, and all_relations; with this the function should be easier
to understand and should avoid any confusion.

~

9.
ereport(DEBUG1,
- (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ (errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name,
+ get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table")));

IIUC the prior conditions mean get_rel_relkind(relid) == RELKIND_SEQUENCE
will be impossible here.

Consider a scenario where logical replication is set up with sequences
seq1 and seq2.
Now drop sequence seq1 and run "ALTER SUBSCRIPTION sub REFRESH PUBLICATION".
It will hit this code path and generate the log:
DEBUG: sequence "public.seq1" removed from subscription "test1"

======
.../replication/logical/sequencesync.c

13. fetch_remote_sequence_data

The "current state" mentioned in the function comment is a bit vague.
Can't tell from this comment what it is returning without looking
deeper into the function code.

Added more comments to clarify.

~~~

15. process_syncing_sequences_for_apply

+ /* We need up-to-date sync state info for subscription sequences here. */
+ FetchTableStates(&started_tx, SUB_REL_KIND_ALL);

Should that say SUB_REL_KIND_SEQUENCE?

We cannot pass SUB_REL_KIND_SEQUENCE here because the
pg_subscription_rel table is shared between sequences and tables. As
changes to either sequences or tables can affect the validity of
relation states, we update both table_states_not_ready and
sequence_states_not_ready simultaneously to ensure consistency, rather
than updating them separately. I have now removed the relation kind
parameter; FetchTableStates is called to fetch all tables and
sequences before calling process_syncing_tables_for_apply and
process_syncing_sequences_for_apply.

~

16.
+ /*
+ * If there are free sync worker slot(s), start a new sequence
+ * sync worker, and break from the loop.
+ */
+ if (nsyncworkers < max_sync_workers_per_subscription)

Should this "if" have some "else" code to log a warning if we have run
out of free workers? Otherwise, how will the user know that the system
may need tuning?

I felt there is no need to log here; otherwise we will get a lot of
log messages which might not be required. Similar logic is used for
tablesync too, in process_syncing_tables_for_apply.

~~~

17. FetchTableStates

/* Fetch all non-ready tables. */
- rstates = GetSubscriptionRelations(MySubscription->oid, true);
+ rstates = GetSubscriptionRelations(MySubscription->oid, rel_type, true);

This feels risky. IMO there needs to be some prior Assert about the
rel_type. For example, if it happened to be SUB_REL_KIND_SEQUENCE then
this function code doesn't seem to make sense.

The pg_subscription_rel table is shared between sequences and tables.
As changes to either sequences or tables can affect the validity of
relation states, we update both table_states_not_ready and
sequence_states_not_ready simultaneously to ensure consistency, rather
than updating them separately. This will update both the tables and
the sequences that should be synced.

The rest of the comments are fixed. The attached v20240729 version
patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20240729-0001-Introduce-pg_sequence_state-and-SetSequenc.patch (text/x-patch)
From baa1158bb8222d36ffc8ad2b0e1bc265ae68f85e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240729 1/3] Introduce pg_sequence_state and
 SetSequenceLastValue functions for enhanced sequence management

This patch introduces a couple of new functions: pg_sequence_state function
allows retrieval of sequence values including LSN. The SetSequenceLastValue
function enables updating sequences with specified values.
---
 doc/src/sgml/func.sgml                 |  27 ++++
 src/backend/commands/sequence.c        | 169 +++++++++++++++++++++++--
 src/include/catalog/pg_proc.dat        |   8 ++
 src/include/commands/sequence.h        |   1 +
 src/test/regress/expected/sequence.out |  12 ++
 src/test/regress/sql/sequence.sql      |   2 +
 6 files changed, 211 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index b669ab7f97..4ff403ee01 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ()
+        <returnvalue>record</returnvalue>
+        ( <parameter>lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>lsn</literal> is the
+        page LSN of the sequence, <literal>last_value</literal> is the most
+        recent value returned by <function>nextval</function> in the current
+        session, <literal>log_cnt</literal> shows how many fetches remain
+        before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..bff990afa7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -328,6 +330,83 @@ ResetSequence(Oid seq_relid)
 	sequence_close(seq_rel, NoLock);
 }
 
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction.  Caller is also
+ * responsible for permissions checking.
+ *
+ * Note: This function resembles do_setval but does not include the locking and
+ * verification steps, as those are managed in a slightly different manner for
+ * logical replication.
+ */
+void
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+{
+	SeqTable        elm;
+	Relation        seqrel;
+	Buffer          buf;
+	HeapTupleData seqdatatuple;
+	Form_pg_sequence_data seq;
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	/* lock page buffer and read tuple */
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
+
+	/* check the comment above nextval_internal()'s equivalent call. */
+	if (RelationNeedsWAL(seqrel))
+	{
+		GetTopTransactionId();
+
+		if (XLogLogicalInfoActive())
+			GetCurrentTransactionId();
+	}
+
+	/* ready to change the on-disk (or really, in-buffer) tuple */
+	START_CRIT_SECTION();
+
+	seq->last_value = new_last_value;
+	seq->is_called = true;
+	seq->log_cnt = 0;
+
+	MarkBufferDirty(buf);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(seqrel))
+	{
+		xl_seq_rec      xlrec;
+		XLogRecPtr      recptr;
+		Page            page = BufferGetPage(buf);
+
+		XLogBeginInsert();
+		XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+		xlrec.locator = seqrel->rd_locator;
+
+		XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+		XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+		recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buf);
+
+	/*
+	 * Clear local cache so that we don't think we have cached numbers.
+	 * Note that we do not change the currval() state.
+	 */
+	elm->cached = elm->last;
+
+	relation_close(seqrel, NoLock);
+}
+
 /*
  * Initialize a sequence's relation with the specified tuple as content
  *
@@ -476,7 +555,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +637,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +766,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +1062,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1262,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1287,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1894,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1909,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 06b2f4ba66..5b250bad78 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..003f2e3413 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20240729-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 586999ecaf1d9a4e30e7b4f7872992f54efe80c8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 24 Jul 2024 11:24:57 +0530
Subject: [PATCH v20240729 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences  are added in 'init' state to pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
---
 doc/src/sgml/catalogs.sgml                    |  18 +-
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  40 +-
 doc/src/sgml/system-views.sgml                |  67 ++++
 src/backend/catalog/pg_publication.c          |  46 +++
 src/backend/catalog/pg_subscription.c         |  53 ++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |   7 +-
 src/backend/commands/subscriptioncmds.c       | 238 +++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  98 ++++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 369 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 219 +++++++++--
 src/backend/replication/logical/worker.c      |  23 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +-
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  23 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/t/034_sequences.pl      | 153 ++++++++
 29 files changed, 1295 insertions(+), 132 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..22b2a93535 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,18 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running either
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+   or <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
+   PUBLICATION</command></link> or <link linkend="sql-altersubscription">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8147,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 3dec0b7cfe..2bb4660336 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..1d3318a4f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1999,8 +1999,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..7868cad75f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,45 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +217,19 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a0b692bf1e..8373ebc5b0 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2185,6 +2190,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..4cb7fd346d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -457,7 +458,7 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
@@ -468,7 +469,8 @@ HasSubscriptionRelations(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +482,19 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* If even a single tuple exists then the subscription has tables. */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +506,17 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * If get_tables is true, tables are included; if get_sequences is true,
+ * sequences are included.  If all_relations is true, all matching relations
+ * are returned; otherwise only tables that have not yet reached the READY
+ * state and sequences that are still in the INIT state are returned.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_relations)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -514,11 +532,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	/* Get the relations that are not in ready state */
+	if (get_tables && !all_relations)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
 					CharGetDatum(SUBREL_STATE_READY));
+	/* Get the sequences that are in init state */
+	else if (get_sequences && !all_relations)
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(SUBREL_STATE_INIT));
 
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, nkeys, skey);
@@ -529,8 +554,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		subreltype;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		subreltype = get_rel_relkind(subrel->srrelid);
+
+		/* If only tables were requested, skip the sequences */
+		if (subreltype == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* If only sequences were requested, skip the tables */
+		if (subreltype != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
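As a quick illustration of the view added above, something like the following
should work once the whole series is applied (the FOR ALL SEQUENCES publication
syntax comes from an earlier patch in the series; the publication and sequence
names are only placeholders):

    CREATE SEQUENCE s1;
    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

    SELECT * FROM pg_publication_sequences WHERE pubname = 'pub_all_seq';
       pubname   | schemaname | sequencename
    -------------+------------+--------------
     pub_all_seq | public     | s1
    (1 row)
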
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index bff990afa7..341aa8c194 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -342,7 +342,8 @@ ResetSequence(Oid seq_relid)
  * logical replication.
  */
 void
-SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
+SetSequenceLastValue(Oid seq_relid, int64 new_last_value, int64 new_log_cnt,
+					 bool new_is_called)
 {
 	SeqTable        elm;
 	Relation        seqrel;
@@ -369,8 +370,8 @@ SetSequenceLastValue(Oid seq_relid, int64 new_last_value)
 	START_CRIT_SECTION();
 
 	seq->last_value = new_last_value;
-	seq->is_called = true;
-	seq->log_cnt = 0;
+	seq->log_cnt = new_log_cnt;
+	seq->is_called = new_is_called;
 
 	MarkBufferDirty(buf);
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..a768884b69 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -103,6 +103,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -751,6 +752,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *sequences;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -781,6 +784,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 										InvalidXLogRecPtr, true);
 			}
 
+			/* Add the publication's sequences to the subscription. */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										 rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
@@ -847,12 +866,35 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If the 'copy_data' parameter is true, newly added relations are set to the
+ * "init" state; otherwise they are set to the "ready" state.
+ *
+ * When 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or last refreshed.
+ *
+ * If 'all_relations' is true, it will mark all objects with "init" state
+ * for re-synchronization; otherwise, only the newly added tables and
+ * sequences will be updated based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool all_relations)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -889,10 +931,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +959,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -951,7 +1000,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -973,6 +1023,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1045,67 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequence sync worker synchronizes all sequences,
+				 * so stop per-relation workers only when the relation is not
+				 * a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+
+			/*
+			 * If all relations should be re-synchronized, reset their state
+			 * to init.  This is currently supported only for sequences.
+			 */
+			else if (all_relations)
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
 			}
 		}
 
@@ -1039,6 +1116,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1504,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1519,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1560,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1579,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1635,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1804,7 +1900,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2258,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2436,62 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences that belong to the specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "      FROM pg_catalog.pg_publication_sequences s\n"
+						   "      WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
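
To sketch the subscriber-side flow that CreateSubscription() and
fetch_sequence_list() implement above: with the default copy_data = true,
published sequences are added to pg_subscription_rel in the init state and are
moved to ready once the sequence sync worker has copied them.  The connection
string and object names below are placeholders:

    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub_all_seq;

    -- 'i' (init) right after creation, 'r' (ready) after the sequence sync
    SELECT c.relname, sr.srsubstate
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid
     WHERE c.relkind = 'S';
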
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9b3cad1cac..28b772df32 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
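
For reference, the new grammar above enables the following statements on the
subscriber (sub_seq is the placeholder subscription from the earlier example):

    -- re-synchronize all published sequences: per AlterSubscription_refresh(),
    -- every sequence is reset to the init state and the sequence sync worker
    -- copies the values again
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;

    -- pick up tables and sequences that were added to or removed from the
    -- publications, without forcing a full sequence re-sync
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION;
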
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..c564b36d7b 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -237,7 +237,8 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given
  * subscription id and relid.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * We are only interested in the leader apply worker, table sync worker, or
+ * sequence sync worker.
  */
 LogicalRepWorker *
 logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
@@ -267,6 +268,38 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 	return res;
 }
 
+/*
+ * Walks the workers array and searches for one that matches the given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)
+{
+	LogicalRepWorker *res = NULL;
+
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	/* Search for attached worker for a given subscription id. */
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		/* Skip workers that are not sequence sync workers. */
+		if (!isSequenceSyncWorker(w))
+			continue;
+
+		if (w->in_use && w->subid == subid && (!only_running || w->proc))
+		{
+			res = w;
+			break;
+		}
+	}
+
+	return res;
+}
+
 /*
  * Similar to logicalrep_worker_find(), but returns a list of all workers for
  * the subscription, instead of just one.
@@ -297,6 +330,25 @@ logicalrep_workers_find(Oid subid, bool only_running, bool acquire_lock)
 	return res;
 }
 
+/*
+ * Return the apply worker for the given subscription id.
+ */
+static LogicalRepWorker *
+logicalrep_apply_worker_find(Oid subid, bool only_running)
+{
+	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
+
+	for (int i = 0; i < max_logical_replication_workers; i++)
+	{
+		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
+
+		if (isApplyWorker(w) && w->subid == subid && (!only_running || w->proc))
+			return w;
+	}
+
+	return NULL;
+}
+
 /*
  * Start new logical replication background worker, if possible.
  *
@@ -317,6 +369,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -324,10 +377,12 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - must be valid worker type
 	 * - tablesync workers are only ones to have relid
 	 * - parallel apply worker is the only kind of subworker
+	 * - sequencesync workers will not have relid
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
 	Assert(is_tablesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
+	Assert(!is_sequencesync_worker || !OidIsValid(relid));
 
 	ereport(DEBUG1,
 			(errmsg_internal("starting logical replication worker for subscription \"%s\"",
@@ -402,7 +457,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +545,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +553,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -815,6 +879,27 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequence sync worker in the subscription's
+ * apply worker, so that restarting the worker can be throttled.
+ *
+ * This function is invoked when the sequence sync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_apply_worker_find(MyLogicalRepWorker->subid, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +948,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (isTableSyncWorker(w) && w->subid == subid)
 			res++;
 	}
 
@@ -1314,7 +1399,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1442,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..124c3d6788
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,369 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve the sequence state (last_value, log_cnt, is_called and page_lsn)
+ * from the remote node.  last_value is the function's return value; log_cnt,
+ * is_called and page_lsn are returned via the corresponding output parameters.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		value = 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the retrieved value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	int64		log_cnt;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	bool		is_called;
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	sequence_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &log_cnt, &is_called,
+												&lsn);
+
+	SetSequenceLastValue(RelationGetRelid(rel), sequence_value, log_cnt,
+						 is_called);
+
+	/* Return the remote LSN at which this sequence state was set. */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronize sequences in batches: committing a transaction per sequence
+ * would add the overhead of repeatedly starting and committing transactions,
+ * while an excessively large batch would keep a transaction open for too
+ * long.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Use the slot name rather than the subscription name as the
+	 * application_name, so that synchronous replication can distinguish this
+	 * worker from the leader apply worker.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure the sequence value is copied as the sequence owner,
+		 * unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Commit when we have reached the end of the current batch, or when
+		 * the last remaining sequence has been synchronized.
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during the current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling.  Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequence sync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
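
To make copy_sequence() above easier to follow, this is roughly the query the
sequence sync worker issues on the publisher for each sequence;
pg_sequence_state() comes from an earlier patch in this series, and the OID is
a placeholder:

    SELECT last_value, log_cnt, is_called, page_lsn
      FROM pg_sequence_state(16394);

The returned last_value/log_cnt/is_called values are applied to the local
sequence via SetSequenceLastValue(), and page_lsn is recorded as the
relation's srsublsn when it is marked ready.
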
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..cfb066ee26 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -130,19 +130,22 @@ typedef enum
 	SYNC_TABLE_STATE_VALID,
 } SyncingTablesState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingTablesState relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchTableStates(void);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -157,15 +160,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -280,7 +292,7 @@ wait_for_worker_state_change(char expected_state)
 void
 invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -387,7 +399,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -429,9 +441,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -464,6 +473,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -477,11 +494,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -660,6 +672,105 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If there is a sequence synchronization worker running already, no need to
+ * start a new one; the existing sequence sync worker will synchronize all the
+ * sequences. If there are still any sequences to be synced after the sequence
+ * sync worker exited, then a new sequence sync worker can be started in the
+ * next iteration. To prevent restarting the sequence sync worker at a high
+ * frequency after a failure, we store its last failure time and start a new
+ * worker only after at least wal_retrieve_retry_interval has passed.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/*
+	 * Start sequence sync worker if there is not one already.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+		/* Check whether a sequence sync worker is already running. */
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_sequence_sync_worker_find(MyLogicalRepWorker->subid,
+														  true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int			nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence
+			 * sync worker, and break from the loop.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				TimestampTz now = GetCurrentTimestamp();
+
+				if (!MyLogicalRepWorker->sequencesync_failure_time ||
+					TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+											   now, wal_retrieve_retry_interval))
+				{
+					MyLogicalRepWorker->sequencesync_failure_time = 0;
+					logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											 MyLogicalRepWorker->dbid,
+											 MySubscription->oid,
+											 MySubscription->name,
+											 MyLogicalRepWorker->userid,
+											 InvalidOid,
+											 DSM_HANDLE_INVALID);
+					break;
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,7 +793,19 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchTableStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1443,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1564,39 +1687,50 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not ready into table_states_not_ready and sequences
+ * that are not ready into sequence_states_not_ready. The pg_subscription_rel
+ * catalog is shared between tables and sequences, and changes to either can
+ * invalidate the cached relation states, so we rebuild table_states_not_ready
+ * and sequence_states_not_ready together to keep them consistent rather than
+ * updating them separately.
+ *
+ * Returns true if the subscription has one or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
 	static bool has_subrels = false;
+	bool		started_tx = false;
 
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch the tables that are in non-ready state and the sequences that
+		 * are in init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1604,7 +1738,11 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -1625,8 +1763,14 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
 	}
 
 	return has_subrels;
@@ -1709,7 +1853,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1861,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1731,17 +1875,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchTableStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ec96b5fe85..7491afdb48 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4531,8 +4536,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4616,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4628,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply worker, tablesync worker, or
+ * sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4658,6 +4670,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  invalidate_syncing_table_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index be0ed1fc27..0c5601af82 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5b250bad78..23445d7aa9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11995,6 +11995,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..58abed907a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_relations);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 003f2e3413..c8f24396a7 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,7 +60,8 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
-extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value);
+extern void SetSequenceLastValue(Oid seq_relid, int64 new_last_value,
+								 int64 new_log_cnt, bool new_is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..adb1c6e32b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -242,6 +245,8 @@ extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
+extern LogicalRepWorker *logicalrep_sequence_sync_worker_find(Oid subid,
+															  bool only_running);
 extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
@@ -253,6 +258,10 @@ extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5201280669..358c76e78e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1443,6 +1443,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..d7cee9dcb1
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,153 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(
+	allows_streaming => 'logical',
+	checkpoint_timeout => '1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+done_testing();
-- 
2.34.1
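
For reviewers, a minimal sketch of the workflow that the new TAP test
(034_sequences.pl) above exercises; the object and connection names here
are illustrative, not taken from the patch:

    -- publisher
    CREATE SEQUENCE s1;
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber: the sequence must already exist there, since only its
    -- state (last_value, log_cnt, is_called) is copied, not the DDL
    CREATE SEQUENCE s1;
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- REFRESH PUBLICATION picks up newly published sequences only;
    -- REFRESH PUBLICATION SEQUENCES re-synchronizes existing ones as well
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    SELECT last_value, log_cnt, is_called FROM s1;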

Attachment: v20240729-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From c02e429c0927262c1868763c1d4b9e544c5aa74d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240729 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 697 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..783874fb75 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..9b3cad1cac 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b8b1888bd3..2fbaf027a9 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4212,6 +4212,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4228,23 +4229,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4256,6 +4263,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4275,6 +4283,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4322,8 +4332,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 891face1b6..be0ed1fc27 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b4d7f9217c..90326c6e53 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2247,6 +2247,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

#92vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#89)
Re: Logical Replication of sequences

On Fri, 26 Jul 2024 at 08:04, Peter Smith <smithpb2250@gmail.com> wrote:

Here are some review comments for latest patch v20240725-0002

======
doc/src/sgml/ref/create_publication.sgml

nitpick - tweak to the description of the example.

======
src/backend/parser/gram.y

preprocess_pub_all_objtype_list:
nitpick - typo "allbjects_list"
nitpick - reword function header
nitpick - /alltables/all_tables/
nitpick - /allsequences/all_sequences/
nitpick - I think code is safe as-is because makeNode internally does
palloc0, but OTOH adding Assert would be nicer just to remove any
doubts.

======
src/bin/psql/describe.c

1.
+ /* Print any publications */
+ if (pset.sversion >= 180000)
+ {
+ int tuples = 0;

No need to assign value 0 here, because this will be unconditionally
assigned before use anyway.

~~~~

2. describePublications

has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 18000);

That's a bug! Should be 180000, not 18000.

======

And, please see the attached diffs patch, which implements the
nitpicks mentioned above.

These are handled in the v20240729 version attached at [1].
[1]: /messages/by-id/CALDaNm3SucGGLe-B-a_aqWNWQZ-yfxFTiAA0JyP-SwX4jq9Y3A@mail.gmail.com

Regards,
Vignesh

#93vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#90)
Re: Logical Replication of sequences

On Fri, 26 Jul 2024 at 11:46, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

There are still pending changes from my previous review of the
0720-0003 patch [1], but here are some new review comments for your
latest patch v20240525-0003.
2b.
Is it better to name these returned by-ref ptrs like 'ret_log_cnt',
and 'ret_lsn' to emphasise they are output variables? YMMV.

I felt this is OK, as it is already mentioned in the function header.

======
src/test/subscription/t/034_sequences.pl

4.
Q. Should we be suspicious that log_cnt changes from '32' to '31', or
is there a valid explanation? It smells like some calculation is
off-by-one, but without debugging I can't tell if it is right or
wrong.

It works like this: after every 33 nextval calls, log_cnt comes back to 0. So
after 33 * 6 = 198 calls log_cnt will be 0, after the 199th call it will be 32,
and after the 200th call it will be 31. This pattern repeats, so this is expected.
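
To illustrate, here is a minimal psql sketch of that cycle (demo_seq is just an
example name; pg_sequence_state() is the function added by the 0001 patch; the
exact counts assume the default of 32 WAL-prefetched values and no checkpoint
in between):

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq') FROM generate_series(1, 198);  -- 33 * 6 calls
SELECT log_cnt FROM pg_sequence_state('demo_seq');        -- log_cnt = 0
SELECT nextval('demo_seq');                               -- 199th call
SELECT log_cnt FROM pg_sequence_state('demo_seq');        -- log_cnt = 32
SELECT nextval('demo_seq');                               -- 200th call
SELECT log_cnt FROM pg_sequence_state('demo_seq');        -- log_cnt = 31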

These are handled in the v20240729 version attached at [1].
[1]: /messages/by-id/CALDaNm3SucGGLe-B-a_aqWNWQZ-yfxFTiAA0JyP-SwX4jq9Y3A@mail.gmail.com

Regards,
Vignesh

#94vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#86)
3 attachment(s)
Re: Logical Replication of sequences

On Thu, 25 Jul 2024 at 12:08, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Jul 25, 2024 at 9:06 AM vignesh C <vignesh21@gmail.com> wrote:

The attached v20240725 version patch has the changes for the same.

Thank you for addressing the comments. Please review the issues below:

1) Sub ahead of pub due to wrong initial sync of last_value for
non-incremented sequences. Steps at [1]
2) Sequence's min value is not honored on sub during replication. Steps at [2]

[1]:
-----------
on PUB:
CREATE SEQUENCE myseq001 INCREMENT 5 START 100;
SELECT * from pg_sequences; -->shows last_val as NULL

on SUB:
CREATE SEQUENCE myseq001 INCREMENT 5 START 100;
SELECT * from pg_sequences; -->correctly shows last_val as NULL
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT * from pg_sequences; -->wrongly updates last_val to 100; it is
still NULL on Pub.

Thus , SELECT nextval('myseq001') on pub gives 100, while on sub gives 105.
-----------

[2]:
-----------
Pub:
CREATE SEQUENCE myseq0 INCREMENT 5 START 10;
SELECT * from pg_sequences;

Sub:
CREATE SEQUENCE myseq0 INCREMENT 5 MINVALUE 100;

Pub:
SELECT nextval('myseq0');
SELECT nextval('myseq0');

Sub:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
--check 'last_value', it is 15 while min_value is 100
SELECT * from pg_sequences;

Thanks for reporting this; both issues are fixed in the attached
v20240730_2 version patch.
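
As a quick re-check of [1] with the v20240730_2 patches applied (a sketch only,
assuming the same sub1 subscription setup as in your steps), the subscriber
should now show:

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT last_value FROM pg_sequences WHERE sequencename = 'myseq001';
-- expected: last_value is NULL, matching the publisher, so the first
-- nextval('myseq001') returns 100 on both nodes.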

Regards,
Vignesh

Attachments:

v20240730_2-0001-Introduce-pg_sequence_state-function-for.patch
From 2e8f82f06d0a6e2cba4acc6cf6fd1f4a1d6525d6 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 12:15:16 +0530
Subject: [PATCH v20240730_2 1/3] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 27 ++++++++
 src/backend/commands/sequence.c        | 92 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index b39f97dc8d..6b951f8f93 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ()
+        <returnvalue>record</returnvalue>
+        ( <parameter>lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>lsn</literal> is the
+        page LSN of the sequence, <literal>last_value</literal> is the most
+        recent value returned by <function>nextval</function> in the current
+        session, <literal>log_cnt</literal> shows how many fetches remain
+        before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9f28d40466..32096725e2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1807,7 +1817,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1822,6 +1832,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 06b2f4ba66..5b250bad78 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 
 { oid => '275', descr => 'return the next oid for a system table',
   proname => 'pg_nextoid', provolatile => 'v', proparallel => 'u',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 2b47b7796b..cbcd65f499 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 674f5f1f66..5fcb36341d 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
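
For quick reference while reviewing 0001: the new function can be exercised
as below, using the signature defined in pg_proc.dat above. The values in the
comments are illustrative, not taken from an actual run.

    CREATE SEQUENCE s;
    SELECT nextval('s');
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('s'::regclass);
    -- e.g.  0/1549F20 |          1 |      32 | t
    -- page_lsn can be compared with the confirmed LSN of a slot or origin
    -- to decide whether a given sequence change is already reflected in
    -- the returned on-disk state.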

Attachment: v20240730_2-0002-Introduce-ALL-SEQUENCES-support-for-Post.patch (text/x-patch)
From bc16087db27b884a5b7db791a71d17c49018e87d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240730_2 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

The psql commands \d and \dRp are also enhanced: \d on a sequence now
lists the publications that include it, and \dRp / \dRp+ gain an
"All sequences" column.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
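
A short sketch of the syntax this patch adds, for reviewers (publication
names are illustrative):

    -- publish all sequences only
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- both object types in one publication; pg_dump emits this as
    -- FOR ALL TABLES, SEQUENCES regardless of the order used here
    CREATE PUBLICATION pub_all FOR ALL SEQUENCES, TABLES;

    -- each object type may appear only once in the FOR ALL list
    CREATE PUBLICATION pub_bad FOR ALL TABLES, TABLES;  -- fails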
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 697 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..783874fb75 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..9b3cad1cac 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b8b1888bd3..2fbaf027a9 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4212,6 +4212,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4228,23 +4229,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4256,6 +4263,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4275,6 +4283,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4322,8 +4332,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d3dd8784d6..5054be0fd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 891face1b6..be0ed1fc27 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3deb6113b8..bd522015ce 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2247,6 +2247,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20240730_2-0003-Enhance-sequence-synchronization-during-.patch (text/x-patch; charset=US-ASCII)
From 610db32f437935a543b38fea5986b65ed5f9f761 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 29 Jul 2024 12:37:53 +0530
Subject: [PATCH v20240730_2 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values from the publisher using pg_sequence_state.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization via the sequence synchronization
     worker, following the same steps as during subscription creation.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences via the
     sequence synchronization worker, following the same steps as during
     subscription creation.
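
As a rough usage sketch of the workflow described above (the publication,
subscription, and connection values below are made up for illustration):

-- On the publisher:
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES, TABLES;

-- On the subscriber; published sequences are added in 'init' state and
-- synchronized in batches by the sequence synchronization worker:
CREATE SUBSCRIPTION seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION seq_pub;

-- Pick up newly published tables/sequences; only new sequences are synced:
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

-- Re-synchronize sequence data for all subscribed sequences:
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;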
---
 doc/src/sgml/catalogs.sgml                    |  18 +-
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  40 +-
 doc/src/sgml/system-views.sgml                |  67 ++++
 src/backend/catalog/pg_publication.c          |  46 +++
 src/backend/catalog/pg_subscription.c         |  57 ++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  14 +-
 src/backend/commands/subscriptioncmds.c       | 240 ++++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  63 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 368 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 227 ++++++++---
 src/backend/replication/logical/worker.c      |  23 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  25 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/t/034_sequences.pl      | 153 ++++++++
 29 files changed, 1267 insertions(+), 147 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..22b2a93535 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,18 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running either
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+   or <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
+   PUBLICATION</command></link> or <link linkend="sql-altersubscription">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8147,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 57cd7bb972..699dd22b7c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..1d3318a4f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1999,8 +1999,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
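
For instance, assuming the worker_type values in pg_stat_subscription match
the wording above, an active sequence synchronization worker could be spotted
with:

SELECT subname, pid, worker_type
FROM pg_stat_subscription
WHERE worker_type = 'sequence synchronization';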
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..7868cad75f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,45 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +217,19 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
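
To make the two refresh variants documented above concrete (the subscription
name is hypothetical):

-- Fetch newly published tables and sequences; with copy_data = false the
-- initial table copy and sequence synchronization are skipped:
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION WITH (copy_data = false);

-- Re-synchronize sequence data for every subscribed sequence, not only the
-- newly added ones:
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;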
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a0b692bf1e..8373ebc5b0 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2185,6 +2190,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
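
Assuming the pg_publication_sequences view added in system_views.sql (not
shown in this excerpt) is defined on top of this function, the published
sequences could be inspected with a query along these lines:

SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'seq_pub';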
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..bc6d18b8b2 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,7 +460,7 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
@@ -468,7 +471,8 @@ HasSubscriptionRelations(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,19 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* If even a single non-sequence relation exists, the subscription has tables. */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +508,17 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * all_relations:
+ * When sequences are requested, all_relations=true returns all sequences;
+ * otherwise only the sequences that are still in 'init' state are returned.
+ * When tables are requested, all_relations=true returns all tables; otherwise
+ * only the tables that have not yet reached 'READY' state are returned.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_relations)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -514,11 +534,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	/* Get the tables that have not yet reached the ready state */
+	if (get_tables && !all_relations)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
 					CharGetDatum(SUBREL_STATE_READY));
+	/* Get the sequences that are in init state */
+	else if (get_sequences && !all_relations)
+		ScanKeyInit(&skey[nkeys++],
+					Anum_pg_subscription_rel_srsubstate,
+					BTEqualStrategyNumber, F_CHAREQ,
+					CharGetDatum(SUBREL_STATE_INIT));
 
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, nkeys, skey);
@@ -529,8 +556,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		subreltype;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		subreltype = get_rel_relkind(subrel->srrelid);
+
+		/* If only tables were requested, skip the sequences */
+		if (subreltype == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* If only sequences were requested, skip the tables */
+		if (subreltype != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
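
Since sequences are now tracked in pg_subscription_rel alongside tables, their
sync progress can be inspected in the same catalog. A minimal sketch on the
subscriber (the regclass cast is only for readability):

    SELECT srrelid::regclass AS relation, srsubstate
      FROM pg_subscription_rel;

Sequences start out in the 'i' (init) state and are moved to 'r' (ready) once
the sequencesync worker has copied their value, as implemented further down in
sequencesync.c.
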
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
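
For reference, the new view is meant to be queried just like
pg_publication_tables. A rough sketch, assuming a publication that publishes
sequences has been created by the other patches in this set (the publication
name is made up):

    SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';
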
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 32096725e2..e8bd53cca1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * logcnt is currently used only by the sequence sync worker to set log_cnt
+ * while synchronizing sequence values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1013,7 +1015,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 	seq->last_value = next;		/* last fetched number */
 	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->log_cnt = (logcnt == SEQ_LOG_CNT_INVALID) ? 0 : logcnt;
 
 	MarkBufferDirty(buf);
 
@@ -1053,7 +1055,7 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	do_setval(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	do_setval(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
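
Note that the existing setval() SQL functions keep passing
SEQ_LOG_CNT_INVALID, so their user-visible behaviour is unchanged and log_cnt
is still reset to 0; only the sequence sync worker supplies an explicit
log_cnt. For example (the sequence name is made up):

    SELECT setval('seq1', 100, true);   -- still clears log_cnt, as before
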
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..984f72dc5a 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -103,6 +103,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -751,6 +752,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *sequences;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -781,6 +784,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 										InvalidXLogRecPtr, true);
 			}
 
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										 rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
 			/*
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
@@ -847,12 +866,35 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If the 'copy_data' parameter is true, newly added relations are set to the
+ * "init" state; otherwise they are set to the "ready" state.
+ *
+ * When 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or publication refresh.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * If 'all_relations' is true, it will mark all objects with "init" state
+ * for re-synchronization; otherwise, only the newly added tables and
+ * sequences will be updated based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool all_relations)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -889,10 +931,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +959,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -951,7 +1000,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -973,6 +1023,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1045,67 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequence sync worker synchronizes all sequences, so
+				 * only stop a worker here when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table",
+										 get_namespace_name(get_rel_namespace(relid)),
+										 get_rel_name(relid),
+										 sub->name)));
+			}
+
+			/*
+			 * If all relations should be re-synchronized, reset their state
+			 * to init. This is currently supported only for sequences.
+			 */
+			else if (all_relations)
+			{
+				ereport(DEBUG1,
+						(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
+				UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+										   InvalidXLogRecPtr);
 			}
 		}
 
@@ -1039,6 +1116,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1504,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1519,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1560,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1579,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1635,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1877,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1900,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2258,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2436,62 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "      FROM pg_catalog.pg_publication_sequences s\n"
+						   "      WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+
+		ExecClearTuple(slot);
+	}
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
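
The query built by fetch_sequence_list() can also be run by hand on the
publisher to preview which sequences a CREATE SUBSCRIPTION or a later refresh
would pick up. A sketch with a made-up publication name:

    SELECT DISTINCT s.schemaname, s.sequencename
      FROM pg_catalog.pg_publication_sequences s
     WHERE s.pubname IN ('pub1');
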
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9b3cad1cac..28b772df32 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
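
The new production corresponds to a command of the form (the subscription
name is made up):

    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

Per AlterSubscription_refresh() above, this refreshes only the sequences and
marks all of them as init so they are re-synchronized, independent of any
copy_data setting.
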
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..04d76e7f54 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,12 +235,14 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * subscription id, relid and type.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * We are only interested in the leader apply worker, table sync worker, or
+ * sequence sync worker.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType type,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
@@ -257,7 +259,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == type && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +319,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -324,10 +327,12 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	 * - must be valid worker type
 	 * - tablesync workers are only ones to have relid
 	 * - parallel apply worker is the only kind of subworker
+	 * - sequencesync workers will not have relid
 	 */
 	Assert(wtype != WORKERTYPE_UNKNOWN);
 	Assert(is_tablesync_worker == OidIsValid(relid));
 	Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
+	Assert(!is_sequencesync_worker || !OidIsValid(relid));
 
 	ereport(DEBUG1,
 			(errmsg_internal("starting logical replication worker for subscription \"%s\"",
@@ -402,7 +407,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +495,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +503,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +626,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType type)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, type, false);
 
 	if (worker)
 	{
@@ -685,7 +699,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +829,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequence sync worker's failure time in the shared-memory slot of
+ * the subscription's apply worker.
+ *
+ * This function is invoked when the sequence sync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +899,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (isTableSyncWorker(w) && w->subid == subid)
 			res++;
 	}
 
@@ -1178,7 +1214,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1350,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1393,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..fc36bf9ef8
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,368 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve the last_value, log_cnt, page_lsn and is_called of the sequence
+ * from the remote node. The last_value will be returned directly, while
+ * log_cnt, is_called and page_lsn will be provided through the output
+ * parameters log_cnt, is_called and lsn, respectively.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		value = 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the retrieved value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		sequence_value;
+	int64		log_cnt;
+	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	bool		is_called;
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	sequence_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &log_cnt, &is_called,
+												&lsn);
+
+	do_setval(RelationGetRelid(rel), sequence_value, is_called, log_cnt);
+
+	/* Return the page LSN of the sequence on the publisher. */
+	return lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence in its own transaction would incur the overhead
+ * of repeatedly starting and committing transactions, while an excessively
+ * large batch would keep a transaction open for too long. This batch size is
+ * a compromise between the two.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Use the slot name rather than the subscription name as the
+	 * application_name, so that it differs from the leader apply worker and
+	 * synchronous replication can distinguish the two.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence value is copied as the sequence owner,
+		 * unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequence sync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or
+		 * finished the last of the sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequence sync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
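
fetch_remote_sequence_data() relies on pg_sequence_state(), which is added by
another patch in this series. On the publisher, the call it issues boils down
to something like the following (the sequence name is made up, and the exact
argument type is whatever that patch defines; an OID is assumed here):

    SELECT last_value, log_cnt, is_called, page_lsn
      FROM pg_sequence_state('seq1'::regclass);
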
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..3e162b9007 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -130,19 +130,22 @@ typedef enum
 	SYNC_TABLE_STATE_VALID,
 } SyncingTablesState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingTablesState relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchTableStates(void);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -157,15 +160,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -205,7 +217,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -252,7 +264,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -280,7 +292,7 @@ wait_for_worker_state_change(char expected_state)
 void
 invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -387,7 +399,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -429,9 +441,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -464,6 +473,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -477,11 +494,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -518,7 +530,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, false);
 
 			if (syncworker)
 			{
@@ -660,6 +673,106 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If there is a sequence synchronization worker running already, no need to
+ * start a new one; the existing sequence sync worker will synchronize all the
+ * sequences. If there are still any sequences to be synced after the sequence
+ * sync worker exited, then a new sequence sync worker can be started in the
+ * next iteration. To prevent restarting the sequence sync worker at too high a
+ * frequency after a failure, we store its last failure time and launch a new
+ * worker only after at least wal_retrieve_retry_interval has elapsed.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/*
+	 * Start sequence sync worker if there is not one already.
+	 */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequence sync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+		else
+		{
+			/*
+			 * Count running sync workers for this subscription, while we have
+			 * the lock.
+			 */
+			int			nsyncworkers =
+				logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+
+			/*
+			 * If there are free sync worker slot(s), start a new sequence
+			 * sync worker, and break from the loop.
+			 */
+			if (nsyncworkers < max_sync_workers_per_subscription)
+			{
+				TimestampTz now = GetCurrentTimestamp();
+
+				if (!MyLogicalRepWorker->sequencesync_failure_time ||
+					TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+											   now, wal_retrieve_retry_interval))
+				{
+					MyLogicalRepWorker->sequencesync_failure_time = 0;
+					logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											 MyLogicalRepWorker->dbid,
+											 MySubscription->oid,
+											 MySubscription->name,
+											 MyLogicalRepWorker->userid,
+											 InvalidOid,
+											 DSM_HANDLE_INVALID);
+					break;
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
  * Process possible state change(s) of tables that are being synchronized.
  */
@@ -682,7 +795,19 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchTableStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1445,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1564,39 +1689,50 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not ready into table_states_not_ready and sequences
+ * that are not ready into sequence_states_not_ready. The pg_subscription_rel
+ * table is shared between sequences and tables. Because changes to either
+ * sequences or relations can affect the validity of relation states, we update
+ * both table_states_not_ready and sequence_states_not_ready simultaneously
+ * to ensure consistency, rather than updating them separately. Returns true if
+ * subscription has 1 or more tables, else false.
  *
  * Note: If this function started the transaction (indicated by the parameter)
  * then it is the caller's responsibility to commit it.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
 	static bool has_subrels = false;
+	bool		started_tx = false;
 
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch the tables that are in non-ready state and the sequences that
+		 * are in init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1604,7 +1740,11 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -1625,8 +1765,14 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
 	}
 
 	return has_subrels;
@@ -1709,7 +1855,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1863,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1731,17 +1877,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchTableStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ec96b5fe85..7491afdb48 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -4531,8 +4536,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4616,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4628,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4658,6 +4670,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  invalidate_syncing_table_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index be0ed1fc27..0c5601af82 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5b250bad78..23445d7aa9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11995,6 +11995,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..58abed907a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_relations);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..71d8c76235 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		(-1)
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..8a12ecb1fe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType type,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,13 +250,18 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5201280669..358c76e78e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1443,6 +1443,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..d7cee9dcb1
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,153 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(
+	allows_streaming => 'logical',
+	checkpoint_timeout => '1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+done_testing();
-- 
2.34.1

#95vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#88)
Re: Logical Replication of sequences

On Thu, 25 Jul 2024 at 15:41, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Jul 25, 2024 at 12:08 PM shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Jul 25, 2024 at 9:06 AM vignesh C <vignesh21@gmail.com> wrote:

The attached v20240725 version patch has the changes for the same.

Thank You for addressing the comments. Please review below issues:

1) Sub ahead of pub due to wrong initial sync of last_value for
non-incremented sequences. Steps at [1]
2) Sequence's min value is not honored on sub during replication. Steps at [2]

One more issue:
3) Sequence datatype's range is not honored on sub during
replication, while it is honored for tables.

Behaviour for tables:
---------------------
Pub: create table tab1( i integer);
Sub: create table tab1( i smallint);

Pub: insert into tab1 values(generate_series(1, 32768));

Error on sub:
2024-07-25 10:38:06.446 IST [178680] ERROR: value "32768" is out of
range for type smallint

---------------------
Behaviour for sequences:
---------------------

Pub:
CREATE SEQUENCE myseq_i as integer INCREMENT 10000 START 1;

Sub:
CREATE SEQUENCE myseq_i as smallint INCREMENT 10000 START 1;

Pub:
SELECT nextval('myseq_i');
SELECT nextval('myseq_i');
SELECT nextval('myseq_i');
SELECT nextval('myseq_i');
SELECT nextval('myseq_i'); -->brings value to 40001

Sub:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT * from pg_sequences; -->last_val reached till 40001, while the
range is till 32767.

This issue is addressed in the v20240730_2 version patch attached at [1].
[1]: /messages/by-id/CALDaNm3+XzHAbgyn8gmbBLK5goyv_uyGgHEsTQxRZ8bVk6nAEg@mail.gmail.com

Regards,
Vignesh

#96Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#94)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh,

Here are my review comments for your latest 0730_2* patches.

Patch v20240730_2-0001 looks good to me.

Patch v20240730_2-0002 looks good to me.

My comments for the v20240730_2-0003 patch are below:

//////////

GENERAL

1. Inconsistent terms

I've noticed there are many variations of how the sequence sync worker is known:
- "sequencesync worker"
- "sequence sync worker"
- "sequence-sync worker"
- "sequence synchronization worker"
- more?

We must settle on some standardized name.

AFAICT we generally use "table synchronization worker" in the docs,
and "tablesync worker" in the code and comments. IMO, we should do
same as that for sequences -- e.g. "sequence synchronization worker"
in the docs, and "sequencesync worker" in the code and comments.

======
doc/src/sgml/catalogs.sgml

nitpick - the links should jump directly to REFRESH PUBLICATION or
REFRESH PUBLICATION SEQUENCES. Currently they go to the top of the
ALTER SUBSCRIPTION page which is not as useful.

======
src/backend/commands/sequence.c

do_setval:
nitpick - minor wording in the function header
nitpick - change some param names to more closely resemble the fields
they get assigned to (/logcnt/log_cnt/, /iscalled/is_called/)

~

2.
  seq->is_called = iscalled;
- seq->log_cnt = 0;
+ seq->log_cnt = (logcnt == SEQ_LOG_CNT_INVALID) ? 0: logcnt;

The logic here for SEQ_LOG_CNT_INVALID seemed strange. Why not just
#define SEQ_LOG_CNT_INVALID as 0 in the first place if that is what
you will assign for invalid? Then you won't need to do anything here
except seq->log_cnt = log_cnt;
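
(For illustration only -- this is just a sketch of the simplification I
mean, not code from the posted patch, and it assumes the sentinel really
can be redefined as 0):

  /* hypothetical: make the "invalid" sentinel the value we store anyway */
  #define SEQ_LOG_CNT_INVALID 0

  ...
  seq->last_value = next;    /* last fetched number */
  seq->is_called = iscalled;
  seq->log_cnt = logcnt;     /* callers pass SEQ_LOG_CNT_INVALID (0) when
                              * there is no meaningful count */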

======
src/backend/catalog/pg_subscription.c

HasSubscriptionRelations:
nitpick - I think the comment "If even a single tuple exists..." is
not quite accurate. e.g. It also has to be the right kind of tuple.

~~

GetSubscriptionRelations:
nitpick - Give more description in the function header about the other
parameters.
nitpick - I felt that a better name for 'all_relations' is all_states.
Because in my mind *all relations* sounds more like when both
'all_tables' and 'all_sequences' are true.
nitpick - IMO add an Assert to be sure something is being fetched.
Assert(get_tables || get_sequences);
nitpick - Rephrase the "skip the tables" and "skip the sequences"
comments to be more aligned with the code condition.

~

3.
- if (not_ready)
+ /* Get the relations that are not in ready state */
+ if (get_tables && !all_relations)
  ScanKeyInit(&skey[nkeys++],
  Anum_pg_subscription_rel_srsubstate,
  BTEqualStrategyNumber, F_CHARNE,
  CharGetDatum(SUBREL_STATE_READY));
+ /* Get the sequences that are in init state */
+ else if (get_sequences && !all_relations)
+ ScanKeyInit(&skey[nkeys++],
+ Anum_pg_subscription_rel_srsubstate,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(SUBREL_STATE_INIT));

This is quite tricky, using multiple flags (get_tables and
get_sequences) in such a way. It might even be a bug -- e.g. is the
'else' keyword correct? As written, when both get_tables and
get_sequences are true and all_relations is false, the sequence
condition is never even evaluated (???).
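
Just to illustrate what I mean (a hypothetical rearrangement, not code
from the posted patch): if the 'else' is actually intentional -- i.e. the
not-READY filter is deliberately reused when both kinds are requested,
because INIT is also not READY -- then spelling that out would avoid the
ambiguity. Sketch only, assuming the Assert(get_tables || get_sequences)
suggested above:

  if (!all_relations)
  {
      if (get_tables)
          /* not-READY also covers sequences still in init state */
          ScanKeyInit(&skey[nkeys++],
                      Anum_pg_subscription_rel_srsubstate,
                      BTEqualStrategyNumber, F_CHARNE,
                      CharGetDatum(SUBREL_STATE_READY));
      else                    /* sequences only */
          ScanKeyInit(&skey[nkeys++],
                      Anum_pg_subscription_rel_srsubstate,
                      BTEqualStrategyNumber, F_CHAREQ,
                      CharGetDatum(SUBREL_STATE_INIT));
  }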

======
src/backend/commands/subscriptioncmds.c

CreateSubscription:
nitpick - let's move the 'tables' declaration to be beside the
'sequences' var for consistency. (in passing move other vars too)
nitpick - it's not strictly required for the patch, but let's change
the 'tables' loop to be consistent with the new sequences loop.

~~~

4. AlterSubscription_refresh

My first impression (from the function comment) is that these function
parameters are a bit awkward. For example,
- It says: If 'copy_data' parameter is true, the function will set
the state to "init"; otherwise, it will set the state to "ready".
- It also says: "If 'all_relations' is true, mark all objects with
"init" state..."
Those statements seem to clash. e.g. if copy_data is false but
all_relations is true, then what (???)

~

nitpick - tweak function comment wording.
nitpick - introduce a 'relkind' variable to avoid multiple calls of
get_rel_relkind(relid)
nitpick - use an existing 'relkind' variable instead of calling
get_rel_relkind(relid);
nitpick - add another comment about skipping (for dropping tablesync slots)

~

5.
+ /*
+ * If all the relations should be re-synchronized, then set the
+ * state to init for re-synchronization. This is currently
+ * supported only for sequences.
+ */
+ else if (all_relations)
+ {
+ ereport(DEBUG1,
+ (errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to
INIT state",
  get_namespace_name(get_rel_namespace(relid)),
  get_rel_name(relid),
  sub->name)));
+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+    InvalidXLogRecPtr);

(This is a continuation of my doubts regarding 'all_relations' in the
previous review comment #4 above)

Here are some more questions about it:

~

5a. Why is this an 'else' of the !bsearch condition? It needs more
explanation of what this case means.

~

5b. Along with more description, it might be better to reverse the
!bsearch condition, so this ('else') code is not so distantly
separated from the condition.

~

5c. Saying "only supported for sequences" seems strange: e.g. what
would it even mean to "re-synchronize" tables? They would all have to
be truncated first -- so if re-sync for tables has no meaning maybe
the parameter is misnamed and should just be 'resync_all_sequences' or
similar? In any case, an Assert here might be good.
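
(A hypothetical sketch of the kind of Assert I had in mind, assuming the
current code structure is kept; not code from the posted patch):

  else if (all_relations)
  {
      /* Re-sync is currently expected only for sequences. */
      Assert(get_rel_relkind(relid) == RELKIND_SEQUENCE);
      ...
  }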

======
src/backend/replication/logical/launcher.c

logicalrep_worker_find:

nitpick - I feel the function comment "We are only interested in..."
is now redundant since you are passing the exact worker type you want.
nitpick - I added an Assert for the types you are expecting to look for
nitpick - The comment "Search for attached worker..." is stale now
because there are more search criteria
nitpick - IMO the "Skip parallel apply workers." code is no longer
needed now that you are matching the worker type.

~~~

6. logicalrep_worker_launch

  * - must be valid worker type
  * - tablesync workers are only ones to have relid
  * - parallel apply worker is the only kind of subworker
+ * - sequencesync workers will not have relid
  */
  Assert(wtype != WORKERTYPE_UNKNOWN);
  Assert(is_tablesync_worker == OidIsValid(relid));
  Assert(is_parallel_apply_worker == (subworker_dsm != DSM_HANDLE_INVALID));
+ Assert(!is_sequencesync_worker || !OidIsValid(relid));

On further reflection, is that added comment and added Assert even
needed? I think they can be removed because saying "tablesync workers
are only ones to have relid" seems to already cover what we needed to
say/assert.

~~~

logicalrep_worker_stop:
nitpick - /type/wtype/ for readability

~~~

7.
/*
* Count the number of registered (not necessarily running) sync workers
* for a subscription.
*/
int
logicalrep_sync_worker_count(Oid subid)

~

I thought this function should count the sequencesync worker as well.
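
(For reference, a hypothetical sketch of what I had in mind, keeping the
existing walk over the workers array; not code from the posted patch):

  int
  logicalrep_sync_worker_count(Oid subid)
  {
      int    res = 0;

      Assert(LWLockHeldByMe(LogicalRepWorkerLock));

      for (int i = 0; i < max_logical_replication_workers; i++)
      {
          LogicalRepWorker *w = &LogicalRepCtx->workers[i];

          /* Count both tablesync and sequencesync workers. */
          if (w->subid == subid &&
              (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
              res++;
      }

      return res;
  }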

======
.../replication/logical/sequencesync.c

fetch_remote_sequence_data:
nitpick - tweaked function comment
nitpick - /value/last_value/ for readability

~

8.
+ *lsn = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+ Assert(!isnull);

Should that be DatumGetUInt64?

~~~

copy_sequence:
nitpick - tweak function header.
nitpick - renamed the sequence vars for consistency, and declared them
all together.

======
src/backend/replication/logical/tablesync.c

9.
 void
 invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
 {
- table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+ relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }

I assume you changed the 'table_states_validity' name because this is
no longer exclusively for tables. So, should the function name also be
similarly changed?

~~~

process_syncing_sequences_for_apply:
nitpick - tweaked the function comment
nitpick - cannot just say "if there is not one already"; a sequence
sync worker might not even be needed.
nitpick - added blank line for readability

~

10.
+ if (syncworker)
+ {
+ /* Now safe to release the LWLock */
+ LWLockRelease(LogicalRepWorkerLock);
+ break;
+ }
+ else
+ {

This 'else' can be removed if you wish to pull back all the indentation.

~~~

11.
process_syncing_tables(XLogRecPtr current_lsn)

Is the function name still OK given that it is now also syncing sequences?

~~~

FetchTableStates:
nitpick - Reworded some of the function comment
nitpick - Function comment is stale because it is still referring to
the function parameter which this patch removed.
nitpick - tweak a comment

======
src/include/commands/sequence.h

12.
+#define SEQ_LOG_CNT_INVALID (-1)

See a previous review comment (#2 above) where I wondered why not use
value 0 for this.

~~~

13.
extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt);
extern void ResetSequenceCaches(void);

do_setval() was an OK function name when it was static, but as an
exposed API it seems like a terrible name. IMO rename it to something
like 'SetSequence' to match the other API functions nearby.

~

nitpick - same change to the parameter names as suggested for the
implementation.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240731_SEQ_003.txt
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 22b2a93..16c427e 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8110,9 +8110,10 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   <para>
    This catalog only contains tables and sequences known to the subscription
    after running either
-   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
-   or <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link> or <link linkend="sql-altersubscription">
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+  <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index bc6d18b..a1ee74b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -490,7 +490,10 @@ HasSubscriptionRelations(Oid subid)
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
-		/* If even a single tuple exists then the subscription has tables. */
+		/*
+		 * Skip sequence tuples. If even a single table tuple
+		 * exists then the subscription has tables.
+		 */
 		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
 		{
 			has_subrels = true;
@@ -508,17 +511,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * all_relations:
- * If returning sequences, if all_relations=true get all sequences,
- * otherwise only get sequences that are in 'init' state.
- * If returning tables, if all_relation=true get all tables, otherwise
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
  * only get tables that have not reached 'READY' state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in 'init' state.
  *
  * The returned list is palloc'ed in the current memory context.
  */
 List *
 GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
-						 bool all_relations)
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -527,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -535,13 +545,13 @@ GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
 				ObjectIdGetDatum(subid));
 
 	/* Get the relations that are not in ready state */
-	if (get_tables && !all_relations)
+	if (get_tables && !all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
 					CharGetDatum(SUBREL_STATE_READY));
 	/* Get the sequences that are in init state */
-	else if (get_sequences && !all_relations)
+	else if (get_sequences && !all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHAREQ,
@@ -561,11 +571,11 @@ GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 		subreltype = get_rel_relkind(subrel->srrelid);
 
-		/* If only tables were requested, skip the sequences */
+		/* Skip sequences if they were not requested */
 		if (subreltype == RELKIND_SEQUENCE && !get_sequences)
 			continue;
 
-		/* If only sequences were requested, skip the tables */
+		/* Skip tables if they were not requested */
 		if (subreltype != RELKIND_SEQUENCE && !get_tables)
 			continue;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index e8bd53c..2e63925 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -942,11 +942,11 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  *
- * logcnt is currently used only by sequence syncworker to set the log_cnt for
- * sequences while synchronization of sequence values from the publisher.
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
 void
-do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt)
+do_setval(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -997,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1014,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = (logcnt == SEQ_LOG_CNT_INVALID) ? 0: logcnt;
+	seq->is_called = is_called;
+	seq->log_cnt = (log_cnt == SEQ_LOG_CNT_INVALID) ? 0: log_cnt;
 
 	MarkBufferDirty(buf);
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 984f72d..1c01a2b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -735,9 +735,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -752,7 +749,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *tables;
 			List	   *sequences;
+			char		table_state;
 
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
@@ -769,9 +768,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * info.
 			 */
 			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			foreach_ptr(RangeVar, rv, tables)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -884,9 +882,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
  * sequences that have been added or removed since the last subscription
  * creation or publication refresh.
  *
- * If 'all_relations' is true, it will mark all objects with "init" state
- * for re-synchronization; otherwise, only the newly added tables and
- * sequences will be updated based on the copy_data parameter.
+ * If 'all_relations' is true, mark all objects with "init" state
+ * for re-synchronization; otherwise, only update the newly added tables and
+ * sequences based on the copy_data parameter.
  */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
@@ -984,12 +982,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -1001,7 +1000,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
 						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
-										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -1086,7 +1085,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				ereport(DEBUG1,
 						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
-										 get_rel_relkind(relid) == RELKIND_SEQUENCE ? "sequence" : "table",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1116,6 +1115,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
 			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
 				continue;
 
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 04d76e7..5da5529 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -236,30 +236,27 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 /*
  * Walks the workers array and searches for one that matches given
  * subscription id, relid and type.
- *
- * We are only interested in the leader apply worker, table sync worker, or
- * sequence sync worker.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType type,
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
 					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			w->type == type && (!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -626,13 +623,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType type)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, type, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index fc36bf9..9aef45a 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -32,9 +32,11 @@
 /*
  * fetch_remote_sequence_data
  *
- * Retrieve the last_value, log_cnt, page_lsn and is_called of the sequence
- * from the remote node. The last_value will be returned directly, while
- * log_cnt, is_called and page_lsn will be provided through the output
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * The sequence last_value will be returned directly, while
+ * log_cnt, is_called and page_lsn will be returned via the output
  * parameters log_cnt, is_called and lsn, respectively.
  */
 static int64
@@ -46,7 +48,7 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
-	int64		value = (Datum) 0;
+	int64		last_value = (Datum) 0;
 	bool		isnull;
 
 	initStringInfo(&cmd);
@@ -70,7 +72,7 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
 				 errmsg("sequence \"%s.%s\" not found on publisher",
 						nspname, relname)));
 
-	value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
 	Assert(!isnull);
 
 	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
@@ -86,23 +88,24 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
 
 	walrcv_clear_result(res);
 
-	return value;
+	return last_value;
 }
 
 /*
  * Copy existing data of a sequence from publisher.
  *
  * Fetch the sequence value from the publisher and set the subscriber sequence
- * with the retrieved value. Caller is responsible for locking the local
+ * with the same value. Caller is responsible for locking the local
  * relation.
  */
 static XLogRecPtr
 copy_sequence(WalReceiverConn *conn, Relation rel)
 {
 	StringInfoData cmd;
-	int64		sequence_value;
-	int64		log_cnt;
-	XLogRecPtr	lsn = InvalidXLogRecPtr;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_lsn = InvalidXLogRecPtr;
 	WalRcvExecResult *res;
 	Oid			tableRow[] = {OIDOID, CHAROID};
 	TupleTableSlot *slot;
@@ -111,7 +114,6 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
 	bool		isnull;
 	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
 	char	   *relname = RelationGetRelationName(rel);
-	bool		is_called;
 
 	/* Fetch Oid. */
 	initStringInfo(&cmd);
@@ -148,14 +150,14 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
 	ExecDropSingleTupleTableSlot(slot);
 	walrcv_clear_result(res);
 
-	sequence_value = fetch_remote_sequence_data(conn, remoteid, nspname,
-												relname, &log_cnt, &is_called,
-												&lsn);
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_lsn);
 
-	do_setval(RelationGetRelid(rel), sequence_value, is_called, log_cnt);
+	do_setval(RelationGetRelid(rel), seq_last_value, seq_is_called, seq_log_cnt);
 
 	/* return the LSN when the sequence state was set */
-	return lsn;
+	return seq_lsn;
 }
 
 /*
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3e162b9..6e7ed8e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -680,7 +680,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
  * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
  * synchronization for them.
  *
- * If there is a sequence synchronization worker running already, no need to
+ * If a sequence synchronization worker is running already, there is no need to
  * start a new one; the existing sequence sync worker will synchronize all the
  * sequences. If there are still any sequences to be synced after the sequence
  * sync worker exited, then a new sequence sync worker can be started in the
@@ -697,7 +697,7 @@ process_syncing_sequences_for_apply()
 	Assert(!IsTransactionState());
 
 	/*
-	 * Start sequence sync worker if there is not one already.
+	 * Start the sequence sync worker if needed, and there is not one already.
 	 */
 	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
 	{
@@ -753,6 +753,7 @@ process_syncing_sequences_for_apply()
 											   now, wal_retrieve_retry_interval))
 				{
 					MyLogicalRepWorker->sequencesync_failure_time = 0;
+
 					logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
 											 MyLogicalRepWorker->dbid,
 											 MySubscription->oid,
@@ -1689,16 +1690,14 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Copy tables that are not ready into table_states_not_ready and sequences
- * that are not ready into sequence_states_not_ready. The pg_subscription_rel
- * table is shared between sequences and tables. Because changes to either
- * sequences or relations can affect the validity of relation states, we update
- * both table_states_not_ready and sequence_states_not_ready simultaneously
- * to ensure consistency, rather than updating them separately. Returns true if
- * subscription has 1 or more tables, else false.
+ * Copy tables that are not READY state into table_states_not_ready, and sequences
+ * that have INIT state into sequence_states_not_ready. The pg_subscription_rel
+ * catalog is shared by tables and sequences. Changes to either sequences or
+ * tables can affect the validity of relation states, so we update both
+ * table_states_not_ready and sequence_states_not_ready simultaneously
+ * to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 static bool
 FetchTableStates(void)
@@ -1728,7 +1727,7 @@ FetchTableStates(void)
 		}
 
 		/*
-		 * Fetch the tables that are in non-ready state and the sequences that
+		 * Fetch tables that are in non-ready state, and sequences that
 		 * are in init state.
 		 */
 		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 71d8c76..b81f496 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -62,7 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
-extern void do_setval(Oid relid, int64 next, bool iscalled, int64 logcnt);
+extern void do_setval(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 8a12ecb..6b201d6 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -242,7 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
-												LogicalRepWorkerType type,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
#97shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#29)
Re: Logical Replication of sequences

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the lsn because the sequence LSN informs the user that
it has been synchronized up to the LSN in pg_subscription_seq. Since
we are not supporting incremental sync, the user will be able to
identify if he should run refresh sequences or not by checking the lsn
of the pg_subscription_seq and the lsn of the sequence(using
pg_sequence_state added) in the publisher.

How will the user know from the sequence's lsn that he needs to run a
refresh? The lsn indicates the page_lsn, so the sequence might advance
on the pub without changing the lsn, and the lsn may then look the same
on the subscriber even though a sequence-refresh is needed. Am I
missing something here?

thanks
Shveta

#98vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#77)
2 attachment(s)
Re: Logical Replication of sequences

On Sat, 20 Jul 2024 at 20:48, vignesh C <vignesh21@gmail.com> wrote:

On Fri, 12 Jul 2024 at 08:22, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh. Here are the rest of my comments for patch v20240705-0003.
======

8. logicalrep_sequence_sync_worker_find

+/*
+ * Walks the workers array and searches for one that matches given
+ * subscription id.
+ *
+ * We are only interested in the sequence sync worker.
+ */
+LogicalRepWorker *
+logicalrep_sequence_sync_worker_find(Oid subid, bool only_running)

There are other similar functions for walking the workers array to
search for a worker. Instead of having different functions for
different cases, wouldn't it be cleaner to combine these into a single
function, where you pass a parameter (e.g. a mask of worker types that
you are interested in finding)?

This is fixed in the v20240730_2 version attached at [1].

17.
Also, where does the number 100 come from? Why not 1000? Why not 10?
Why have batching at all? Maybe there should be some comment to
describe the reason and the chosen value.

I ran some tests with 10, 100, and 1000 sequences per batch for
10000 sequences. The results were:
10 per batch - 4.94 seconds
100 per batch - 4.87 seconds
1000 per batch - 4.53 seconds

There is not much time difference between them. Currently, the batch
size is set to 100, which seems fine since it does not generate a lot of
transactions. Additionally, the locks on the sequences are periodically
released as each batch's transaction commits.

I used the test from the attached patch, changing
max_sequences_sync_per_batch to 10/100/1000 in 035_sequences.pl, to
verify this.

[1]: /messages/by-id/CALDaNm3+XzHAbgyn8gmbBLK5goyv_uyGgHEsTQxRZ8bVk6nAEg@mail.gmail.com

Regards,
Vignesh

Attachments:

sequence_perf.txt
0001-Performance-testing-changes.patch
From d060bd7903d812904816f0ada004420e2d9f0c21 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 30 Jul 2024 11:34:55 +0530
Subject: [PATCH] Performance testing changes.

Performance testing changes.
---
 src/backend/replication/logical/launcher.c     |   1 +
 src/backend/replication/logical/sequencesync.c |   6 +-
 src/backend/utils/misc/guc_tables.c            |  12 +++
 src/backend/utils/misc/postgresql.conf.sample  |   1 +
 src/include/replication/logicallauncher.h      |   2 +
 src/test/subscription/t/035_sequences.pl       | 142 +++++++++++++++++++++++++
 6 files changed, 162 insertions(+), 2 deletions(-)
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 04d76e7..2ece21f 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -50,6 +50,7 @@
 int			max_logical_replication_workers = 4;
 int			max_sync_workers_per_subscription = 2;
 int			max_parallel_apply_workers_per_subscription = 2;
+int			max_sequences_sync_per_batch = 10;
 
 LogicalRepWorker *MyLogicalRepWorker = NULL;
 
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index fc36bf9..6bcfbdf 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -20,6 +20,7 @@
 #include "catalog/pg_subscription_rel.h"
 #include "commands/sequence.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/logicalworker.h"
 #include "replication/worker_internal.h"
 #include "utils/acl.h"
@@ -29,6 +30,7 @@
 #include "utils/rls.h"
 #include "utils/usercontext.h"
 
+
 /*
  * fetch_remote_sequence_data
  *
@@ -287,11 +289,11 @@ LogicalRepSyncSequences(void)
 		 * Have we reached the end of the current batch of sequences,
 		 * or last remaining sequences to synchronize?
 		 */
-		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+		if (((curr_seq % max_sequences_sync_per_batch) == 0) ||
 			curr_seq == seq_count)
 		{
 			/* Obtain the starting index of the current batch. */
-			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % max_sequences_sync_per_batch);
 
 			/* LOG all the sequences synchronized during current batch. */
 			for (; i < curr_seq; i++)
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 6a623f5..017fb59 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3185,6 +3185,18 @@ struct config_int ConfigureNamesInt[] =
 	},
 
 	{
+		{"max_sequences_sync_per_batch",
+			PGC_SIGHUP,
+			REPLICATION_SUBSCRIBERS,
+			gettext_noop("Maximum number of sequences to be synchronized in one batch."),
+			NULL,
+		},
+		&max_sequences_sync_per_batch,
+		10, 0, 10000,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"max_parallel_apply_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 9ec9f97..82a0713 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -383,6 +383,7 @@
 					# (change requires restart)
 #max_sync_workers_per_subscription = 2	# taken from max_logical_replication_workers
 #max_parallel_apply_workers_per_subscription = 2	# taken from max_logical_replication_workers
+#max_sequences_sync_per_batch = 10
 
 
 #------------------------------------------------------------------------------
diff --git a/src/include/replication/logicallauncher.h b/src/include/replication/logicallauncher.h
index ff0438b..673f114 100644
--- a/src/include/replication/logicallauncher.h
+++ b/src/include/replication/logicallauncher.h
@@ -15,6 +15,8 @@
 extern PGDLLIMPORT int max_logical_replication_workers;
 extern PGDLLIMPORT int max_sync_workers_per_subscription;
 extern PGDLLIMPORT int max_parallel_apply_workers_per_subscription;
+extern PGDLLIMPORT int max_sequences_sync_per_batch;
+
 
 extern void ApplyLauncherRegister(void);
 extern void ApplyLauncherMain(Datum main_arg);
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000..e5ac670
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,142 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $sequence_count = 10000;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h
+shared_buffers = 40GB
+max_worker_processes = 32
+max_parallel_maintenance_workers = 24
+max_parallel_workers = 32
+synchronous_commit = off
+checkpoint_timeout = 1d
+max_wal_size = 24GB
+min_wal_size = 15GB
+max_locks_per_transaction = 11000
+autovacuum = off'
+);
+
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->append_conf('postgresql.conf', 'max_sequences_sync_per_batch = 100
+shared_buffers = 40GB
+max_worker_processes = 32
+max_parallel_maintenance_workers = 24
+max_parallel_workers = 32
+synchronous_commit = off
+checkpoint_timeout = 1d
+max_wal_size = 24GB
+min_wal_size = 15GB
+max_locks_per_transaction = 11000
+autovacuum = off');
+
+$node_subscriber->start;
+
+
+for (my $i = 0; $i < $sequence_count; $i++)
+{
+	$node_publisher->safe_psql('postgres', "CREATE SEQUENCE regress_s$i;");
+	$node_publisher->safe_psql('postgres', "SELECT nextval('regress_s$i') FROM generate_series(1,100);");
+	$node_subscriber->safe_psql('postgres', "CREATE SEQUENCE regress_s$i;");
+}
+
+my $result = 1;
+is($result, '1', "test started");
+use Time::HiRes qw( time );
+my $start = time();
+
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+
+my $end = time();
+my $runtime = sprintf("%.16s", $end - $start);
+print "test execution time $runtime\n";
+
+$start = time();
+
+$node_subscriber->safe_psql('postgres',
+        "ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+$synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+
+$end = time();
+$runtime = sprintf("%.16s", $end - $start);
+print "test execution time $runtime\n";
+
+$start = time();
+
+$node_subscriber->safe_psql('postgres',
+        "ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+$synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+
+$end = time();
+$runtime = sprintf("%.16s", $end - $start);
+print "test execution time $runtime\n";
+
+$start = time();
+
+$node_subscriber->safe_psql('postgres',
+        "ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+$synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$end = time();
+$runtime = sprintf("%.16s", $end - $start);
+print "test execution time $runtime\n";
+
+$start = time();
+$node_subscriber->safe_psql('postgres',
+        "ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+$synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$end = time();
+$runtime = sprintf("%.16s", $end - $start);
+print "test execution time $runtime\n";
+
+done_testing();
-- 
1.8.3.1

#99Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#94)
Re: Logical Replication of sequences

Hi Vignesh,

I have a question about the subscriber-side behaviour of currval().

======

AFAIK it is normal for currval() to give an error if nextval() has not
yet been called [1].

For example.
test_pub=# create sequence s1;
CREATE SEQUENCE
test_pub=# select * from currval('s1');
2024-08-01 07:42:48.619 AEST [24131] ERROR: currval of sequence "s1"
is not yet defined in this session
2024-08-01 07:42:48.619 AEST [24131] STATEMENT: select * from currval('s1');
ERROR: currval of sequence "s1" is not yet defined in this session
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from currval('s1');
currval
---------
1
(1 row)

test_pub=#

~~~

OTOH, I was hoping to be able to use currval() on the subscriber side
to see the current sequence value after issuing ALTER .. REFRESH
PUBLICATION SEQUENCES.

Unfortunately, it has the same behaviour where currval() cannot be
used without nextval(). But, on the subscriber, you probably never
want to do an explicit nextval() independently of the publisher.

Is this currently a bug, or maybe a quirk that should be documented?

For example:

Publisher
==========

test_pub=# create sequence s1;
CREATE SEQUENCE
test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
2
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
3
(1 row)

test_pub=#

Subscriber
==========

(Notice currval() always gives an error unless nextval() has been called first.)

test_sub=# create sequence s1;
CREATE SEQUENCE
test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'
PUBLICATION pub1;
2024-08-01 07:51:06.955 AEST [24325] WARNING: subscriptions created
by regression test cases should have names starting with "regress_"
WARNING: subscriptions created by regression test cases should have
names starting with "regress_"
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION
test_sub=# 2024-08-01 07:51:07.023 AEST [4211] LOG: logical
replication apply worker for subscription "sub1" has started
2024-08-01 07:51:07.037 AEST [4213] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
2024-08-01 07:51:07.063 AEST [4213] LOG: logical replication
synchronization for subscription "sub1", sequence "s1" has finished
2024-08-01 07:51:07.063 AEST [4213] LOG: logical replication sequence
synchronization worker for subscription "sub1" has finished

test_sub=# SELECT * FROM currval('s1');
2024-08-01 07:51:19.688 AEST [24325] ERROR: currval of sequence "s1"
is not yet defined in this session
2024-08-01 07:51:19.688 AEST [24325] STATEMENT: SELECT * FROM currval('s1');
ERROR: currval of sequence "s1" is not yet defined in this session
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION
test_sub=# 2024-08-01 07:51:35.298 AEST [4993] LOG: logical
replication sequence synchronization worker for subscription "sub1"
has started

test_sub=# 2024-08-01 07:51:35.321 AEST [4993] LOG: logical
replication synchronization for subscription "sub1", sequence "s1" has
finished
2024-08-01 07:51:35.321 AEST [4993] LOG: logical replication sequence
synchronization worker for subscription "sub1" has finished

test_sub=#
test_sub=# SELECT * FROM currval('s1');
2024-08-01 07:51:41.438 AEST [24325] ERROR: currval of sequence "s1"
is not yet defined in this session
2024-08-01 07:51:41.438 AEST [24325] STATEMENT: SELECT * FROM currval('s1');
ERROR: currval of sequence "s1" is not yet defined in this session
test_sub=#
test_sub=# SELECT * FROM nextval('s1');
nextval
---------
4
(1 row)

test_sub=# SELECT * FROM currval('s1');
currval
---------
4
(1 row)

test_sub=#
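
As a side note (just an illustration, not something the patch adds): if
the goal is only to inspect the synchronized value on the subscriber
without consuming a number, reading the sequence relation directly
avoids the per-session currval() restriction:

test_sub=# SELECT last_value, is_called FROM s1;

That reports the value written by the sequence-sync worker regardless
of whether nextval() has been called in the current session.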

======
[1]: https://www.postgresql.org/docs/current/functions-sequence.html

Kind Regards,
Peter Smith.
Fujitsu Australia.

#100Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#99)
Re: Logical Replication of sequences

Hi Vignesh,

I noticed that when replicating sequences (using the latest patches
0730_2*) the subscriber-side checks the *existence* of the sequence,
but apparently it is not checking other sequence attributes.

For example, consider:

Publisher: "CREATE SEQUENCE s1 START 1 INCREMENT 2;" should be a
sequence of only odd numbers.
Subscriber: "CREATE SEQUENCE s1 START 2 INCREMENT 2;" should be a
sequence of only even numbers.

Because the names match, currently the patch allows replication of the
s1 sequence. I think that might lead to unexpected results on the
subscriber. IMO it might be safer to report ERROR unless the sequences
match properly (i.e. not just a name check).

Below is a demonstration of the problem:

==========
Publisher:
==========

(publisher sequence is odd numbers)

test_pub=# create sequence s1 start 1 increment 2;
CREATE SEQUENCE
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
3
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
5
(1 row)

test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
test_pub=#

==========
Subscriber:
==========

(subscriber sequence is even numbers)

test_sub=# create sequence s1 start 2 increment 2;
CREATE SEQUENCE
test_sub=# SELECT * FROM nextval('s1');
nextval
---------
2
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
4
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
6
(1 row)

test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'
PUBLICATION pub1;
2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created
by regression test cases should have names starting with "regress_"
WARNING: subscriptions created by regression test cases should have
names starting with "regress_"
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION
test_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical
replication apply worker for subscription "sub1" has started
2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
synchronization for subscription "sub1", sequence "s1" has finished
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has finished

(After the CREATE SUBSCRIPTION we get the replicated odd values from
the publisher, even though the subscriber-side sequence was supposed
to produce only even numbers.)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
7
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
9
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
11
(1 row)

(Looking at the description you would expect odd values for this
sequence to be impossible)

test_sub=# \dS+ s1
Sequence "public.s1"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 2 | 1 | 9223372036854775807 | 2 | no | 1
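
As a quick cross-check (illustration only, not part of the patch),
running the same catalog query on both nodes makes the definition
mismatch obvious:

SELECT seqtypid::regtype, seqstart, seqincrement, seqmin, seqmax, seqcycle
FROM pg_sequence
WHERE seqrelid = 's1'::regclass;

Here the publisher reports seqstart = 1 while the subscriber reports
seqstart = 2; these are the kinds of attributes the subscriber could
compare before accepting the sequence for synchronization.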

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#101shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#1)
Re: Logical Replication of sequences

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we
even need to start an apply worker and create a replication slot when
the subscription created is for 'sequences only'? IIUC, currently the
logical replication apply worker is the one launching the
sequence-sync worker whenever needed. I think it should be the
launcher doing this job, and thus the apply worker may not even be
needed for the current sequence-sync functionality. Going forward,
when we implement incremental sync of sequences, we may need the
apply worker, but for now it is not needed.
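
For what it's worth, an easy way to observe which workers actually get
launched for such a subscription (just an observation query, relying on
the existing worker_type column of pg_stat_subscription, which this
patch extends with "sequence synchronization") is:

SELECT subname, worker_type, pid, relid::regclass
FROM pg_stat_subscription;

That would make it easier to judge whether the apply worker is really
required for a sequences-only subscription.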

thanks
Shveta

#102shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#101)
Re: Logical Replication of sequences

On Fri, Aug 2, 2024 at 2:24 PM shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we
even need to start an apply worker and create a replication slot when
the subscription created is for 'sequences only'? IIUC, currently the
logical replication apply worker is the one launching the
sequence-sync worker whenever needed. I think it should be the
launcher doing this job, and thus the apply worker may not even be
needed for the current sequence-sync functionality. Going forward,
when we implement incremental sync of sequences, we may need the
apply worker, but for now it is not needed.

Also, can we please mention the state changes and 'who does what' atop
the sequencesync.c file, similar to what we have atop tablesync.c?
Otherwise it is difficult to figure out the flow.

thanks
Shveta

#103vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#96)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 31 Jul 2024 at 12:56, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Here are my review comments for your latest 0730_2* patches.

Patch v20240730_2-0001 looks good to me.

Patch v20240730_2-0002 looks good to me.

My comments for the v20240730_2-0003 patch are below:
~~~

4. AlterSubscription_refresh

My first impression (from the function comment) is that these function
parameters are a bit awkward. For example,
- It says: If 'copy_data' parameter is true, the function will set
the state to "init"; otherwise, it will set the state to "ready".
- It also says: "If 'all_relations' is true, mark all objects with
"init" state..."
Those statements seem to clash. e.g. if copy_data is false but
all_relations is true, then what (???)

all_relations will be true only for "ALTER SUBSCRIPTION ... REFRESH
PUBLICATION SEQUENCES". The WITH option is not supported along with
this command, so copy_data = false is not possible here. Added an
assert for this.
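
For example (illustration only, not something the patch adds), the
effect of the command can be verified on the subscriber with:

SELECT srrelid::regclass AS relation, srsubstate
FROM pg_subscription_rel
ORDER BY 1;

After REFRESH PUBLICATION SEQUENCES all sequence rows go back to 'i'
(init) and return to 'r' (ready) once the sequence-sync worker has
finished.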

8.
+ *lsn = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+ Assert(!isnull);

Should that be DatumGetUInt64?

It should be DatumGetLSN here.

The rest of the comments are fixed. The attached v20240805 version
patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20240805-0003-Enhance-sequence-synchronization-during-su.patchtext/x-patch; charset=US-ASCII; name=v20240805-0003-Enhance-sequence-synchronization-during-su.patchDownload
From 028b388ba4d9ac6c63232804479e4fc377f8b0ce Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 29 Jul 2024 12:37:53 +0530
Subject: [PATCH v20240805 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences  are added in 'init' state to pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
---
 doc/src/sgml/catalogs.sgml                    |  19 +-
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  50 ++-
 doc/src/sgml/ref/create_subscription.sgml     |   8 +
 doc/src/sgml/system-views.sgml                |  67 ++++
 src/backend/catalog/pg_publication.c          |  46 +++
 src/backend/catalog/pg_subscription.c         |  81 +++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  28 +-
 src/backend/commands/subscriptioncmds.c       | 315 ++++++++++++---
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  71 +++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 372 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 234 ++++++++---
 src/backend/replication/logical/worker.c      |  74 +++-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  31 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 178 +++++++++
 32 files changed, 1449 insertions(+), 201 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..16c427ec3a 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running either
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+  <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a65839a670..3c550ae975 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..1d3318a4f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1999,8 +1999,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..eb7d544116 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,45 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -194,12 +211,35 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           parameter of <command>CREATE SUBSCRIPTION</command> for details about
           copying pre-existing data in binary format.
          </para>
+         <para>
+          See the <link linkend="sql-createsubscription-params-with-copy-data"><literal>copy_data</literal></link>
+          on how to handle the warnings regarding the difference in sequence
+          definition between the publisher and the subscriber.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. The
+      sequence definition can differ between the publisher and the subscriber,
+      this is detected and a WARNING is logged to the user, but the warning is
+      only an indication of a potential problem; it is recommended to alter the
+      sequence to keep the sequence option same as the publisher and execute
+      the command again.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..de3bdb8ed1 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,14 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          The sequence definition can differ between the publisher and the
+          subscriber, this is detected and a WARNING is logged to the user, but
+          the warning is only an indication of a potential problem; it is
+          recommended to alter the sequence to keep the sequence option same as
+          the publisher and execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a0b692bf1e..8373ebc5b0 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2185,6 +2190,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..8a2161e259 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,7 +460,7 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
@@ -468,7 +471,8 @@ HasSubscriptionRelations(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,35 +511,41 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached 'READY' state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in 'init' state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
 	HeapTuple	tup;
 	int			nkeys = 0;
-	ScanKeyData skey[2];
+	ScanKeyData skey;
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
-	ScanKeyInit(&skey[nkeys++],
+	ScanKeyInit(&skey,
 				Anum_pg_subscription_rel_srsubid,
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
-		ScanKeyInit(&skey[nkeys++],
-					Anum_pg_subscription_rel_srsubstate,
-					BTEqualStrategyNumber, F_CHARNE,
-					CharGetDatum(SUBREL_STATE_READY));
-
 	scan = systable_beginscan(rel, InvalidOid, false,
-							  NULL, nkeys, skey);
+							  NULL, nkeys, &skey);
 
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
@@ -532,6 +556,27 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE)
+		{
+			/* Skip sequences if they were not requested */
+			if (!get_sequences)
+				continue;
+
+			/* Skip all non-init sequences if not all_states were requested */
+			if (!all_states && (subrel->srsubstate != SUBREL_STATE_INIT))
+				continue;
+		}
+		else
+		{
+			/* Skip tables if they were not requested */
+			if (!get_tables)
+				continue;
+
+			/* Skip all ready tables if not all_states were requested */
+			if (!all_states && (subrel->srsubstate == SUBREL_STATE_READY))
+				continue;
+		}
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 80bfea5cd5..d2bd796564 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1825,7 +1827,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..8c7796d052 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -734,9 +736,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +750,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *tables;
+			List	   *sequences;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -766,9 +769,24 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * info.
 			 */
 			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			foreach_ptr(RangeVar, rv, tables)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										 rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -847,12 +865,35 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If 'copy_data' parameter is true, the function will set the state
+ * to "init"; otherwise, it will set the state to "ready".
+ *
+ * When 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * If 'resync_all_sequences' is true, mark all objects with "init" state
+ * for re-synchronization; otherwise, only update the newly added tables and
+ * sequences based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +911,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	/* resync_all_sequences cannot be specified with refresh_tables */
+	Assert(!(resync_all_sequences && refresh_tables));
+
+	/* resync_all_sequences cannot be specified with copy_data as false */
+	Assert(!(resync_all_sequences && !copy_data));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +936,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +964,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +989,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1006,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1024,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(LOG,
+							(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											 get_namespace_name(get_rel_namespace(relid)),
+											 get_rel_name(relid),
+											 sub->name)));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1070,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1125,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1514,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1570,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1589,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1645,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1887,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1914,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2272,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2450,105 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[8] = {TEXTOID, TEXTOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd, "SELECT DISTINCT n.nspname, c.relname, s.seqtypid, s.seqmin, s.seqmax, s.seqstart, s.seqincrement, s.seqcycle"
+						   " FROM pg_publication p, LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid),"
+						   " pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace JOIN pg_sequence s ON c.oid = s.seqrelid"
+						   " WHERE c.oid = gps.relid AND p.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 8, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		Oid			seqtypid;
+		int64		seqmin;
+		int64		seqmax;
+		int64		seqstart;
+		int64		seqincrement;
+		bool		seqcycle;
+		bool		isnull;
+		RangeVar   *rv;
+		Oid			relid;
+		HeapTuple	tup;
+		Form_pg_sequence seqform;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+		seqtypid = DatumGetObjectId(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+		seqmin = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+		Assert(!isnull);
+		seqmax = DatumGetInt64(slot_getattr(slot, 5, &isnull));
+		Assert(!isnull);
+		seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+		Assert(!isnull);
+		seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+		Assert(!isnull);
+		seqcycle = DatumGetBool(slot_getattr(slot, 8, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Get the local sequence */
+		tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+				 get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid));
+
+		seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+			seqform->seqincrement != seqincrement ||
+			seqform->seqcycle != seqcycle)
+			ereport(WARNING,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("Sequence option in remote and local is not same for \"%s.%s\"",
+						   get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid)),
+					errhint("Alter/Re-create the sequence using the same options as in remote."));
+
+		ReleaseSysCache(tup);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9b3cad1cac..28b772df32 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..36c9952994 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,8 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if ((isTableSyncWorker(w) || isSequenceSyncWorker(w)) &&
+			w->subid == subid)
 			res++;
 	}
 
@@ -1178,7 +1210,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1346,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1389,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..c5c347370d
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,372 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * last_value is returned as the function result, while log_cnt, is_called
+ * and page_lsn are returned via the output parameters log_cnt, is_called
+ * and lsn, respectively.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		last_value = 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive state of sequence \"%s.%s\" from the publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return last_value;
+}
+
+/*
+ * Copy the existing state of a sequence from the publisher.
+ *
+ * Fetch the sequence state from the publisher and set the subscriber's
+ * sequence to the same state. The caller is responsible for locking the
+ * local relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence \"%s.%s\" information from the publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_lsn);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly, so sequences are synchronized in
+ * batches. At the same time, an excessively large batch size would keep the
+ * transaction open for too long.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence synchronization does not honor RLS policies.  That is not
+		 * a problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
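+		/*
+		 * Unlike tablesync, there is no catchup phase for sequences: the
+		 * fetched sequence state is applied in a single step, so the relation
+		 * can be marked READY immediately.
+		 */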
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Commit when we have reached the end of the current batch of
+		 * sequences, or when the last remaining sequence has been synchronized.
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
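+		/*
+		 * The sequencesync failure time is recorded by the
+		 * logicalrep_seqsyncworker_failuretime() before_shmem_exit callback
+		 * registered in SetupApplyOrSyncWorker, once this worker exits due to
+		 * the error.
+		 */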
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..96ff253ab0 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -130,19 +130,22 @@ typedef enum
 	SYNC_TABLE_STATE_VALID,
 } SyncingTablesState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingTablesState relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchTableStates(void);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -157,15 +160,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -205,7 +217,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -252,7 +264,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -278,9 +290,9 @@ wait_for_worker_state_change(char expected_state)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -387,7 +399,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -429,9 +441,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -464,6 +473,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -477,11 +494,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -518,7 +530,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -661,10 +674,108 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent restarting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time and only start a new sequencesync
+ * worker after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequence sync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync and/or sequencesync workers for any newly added
+ * relations.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -682,7 +793,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchTableStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1444,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1564,39 +1688,48 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not in READY state into table_states_not_ready, and
+ * sequences that are in INIT state into sequence_states_not_ready. The
+ * pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if the subscription has one or more relations, else false.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
 	static bool has_subrels = false;
+	bool		started_tx = false;
 
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch tables that are in non-ready state, and sequences that are in
+		 * init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1604,7 +1737,11 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -1625,8 +1762,14 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
 	}
 
 	return has_subrels;
@@ -1709,7 +1852,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1860,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1731,17 +1874,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchTableStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6dc54c7283..2e84b24617 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1137,8 +1145,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1193,8 +1204,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1254,8 +1268,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1381,8 +1398,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2223,8 +2243,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3621,8 +3644,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4531,8 +4557,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4649,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4656,8 +4689,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7997b841cb..899f0299b8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12007,6 +12007,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..58abed907a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_relations);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6dff23fe6f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
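+	/*
+	 * Time of the last sequencesync worker failure for this subscription;
+	 * used by the apply worker to avoid restarting the sequencesync worker
+	 * too frequently. Only maintained in the apply worker's slot.
+	 */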
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,22 +250,27 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5201280669..358c76e78e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1443,6 +1443,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..88f2705abe
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,178 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences on the publisher, but changes to existing sequences should
+# not be synced.
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences on the publisher, and changes to existing sequences should
+# also be synced.
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# when the sequence definition does not match between the publisher and the subscriber.
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+($result, my $stdout, my $stderr) = $node_subscriber->psql(
+	'postgres', "
+        ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
+like(
+	$stderr,
+	qr/WARNING: ( [A-Z0-9]+:)? sequence options in remote and local are not the same for "public.regress_s4"/,
+	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
+);
+
+
+done_testing();
-- 
2.34.1

Attachment: v20240805-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 57285fdbdd29dec9182efab61f27b8e351b92c67 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240805 1/3] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 27 ++++++++
 src/backend/commands/sequence.c        | 92 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 0f7154b76a..5325af7773 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>boolean</type> )
+       </para>
+       <para>
+        Returns the current state of the sequence.
+        <literal>page_lsn</literal> is the page LSN of the sequence,
+        <literal>last_value</literal> is the last value stored in the
+        sequence's data page (with a cache setting greater than one this can
+        be ahead of the value most recently returned by
+        <function>nextval</function>), <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether
+        <literal>last_value</literal> has already been returned by
+        <function>nextval</function>.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..80bfea5cd5 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value stored in the sequence's data tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d36f6001bb..7997b841cb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
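
As a quick way to see what the 0001 patch provides, the new function can be
exercised from psql like this (a sketch only; the sequence name is arbitrary
and the reported page LSN will differ per installation):

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('demo_seq');
    -- one row: the sequence page's LSN plus, right after a first nextval,
    -- last_value = 1, log_cnt = 32, is_called = t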

Attachment: v20240805-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From dabe64410982b7384c2a6e2492f728e8548411ef Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240805 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

The psql \d and \dRp commands are also updated: \d on a sequence now lists
the publications it belongs to, and \dRp/\dRp+ show whether a publication
includes all sequences.  pg_dump is likewise taught to dump such publications.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
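
For example (these statements mirror the added regression tests; the
publication and sequence names are arbitrary):

    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_all_obj FOR ALL TABLES, SEQUENCES;
    \dRp+ pub_all_seq      -- "All sequences" column shows "t"
    \d+ some_sequence      -- footer lists the publications it belongs to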
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 697 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..783874fb75 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..9b3cad1cac 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that each object type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 79190470f7..98efa973f0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4252,6 +4252,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4268,23 +4269,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4296,6 +4303,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4315,6 +4323,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4362,8 +4372,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		/* Set the title first; printTableInit() keeps a pointer to it. */
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6e6b7c2711..5e2b1bbd1a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

#104vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#1)
Re: Logical Replication of sequences

On Thu, 1 Aug 2024 at 03:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

I have a question about the subscriber-side behaviour of currval().

======

AFAIK it is normal for currval() to give an error if nextval() has not
yet been called [1]

For example.
test_pub=# create sequence s1;
CREATE SEQUENCE
test_pub=# select * from currval('s1');
2024-08-01 07:42:48.619 AEST [24131] ERROR: currval of sequence "s1"
is not yet defined in this session
2024-08-01 07:42:48.619 AEST [24131] STATEMENT: select * from currval('s1');
ERROR: currval of sequence "s1" is not yet defined in this session
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from currval('s1');
currval
---------
1
(1 row)

test_pub=#

~~~

OTOH, I was hoping to be able to use currval() at the subscriber-side
to see the current sequence value after issuing ALTER .. REFRESH
PUBLICATION SEQUENCES.

Unfortunately, it has the same behaviour where currval() cannot be
used without nextval(). But, on the subscriber, you probably never
want to do an explicit nextval() independently of the publisher.

Is this currently a bug, or maybe a quirk that should be documented?

currval() returns the most recent value obtained from nextval() for a
given sequence within the current session. The function is
session-specific, meaning it only reports the last sequence value
retrieved during that session; calling currval() before nextval() in
the same session raises the error "currval of sequence is not yet
defined in this session". So even on the publisher this value is only
visible in the current session and not in a different session.
Alternatively, you can use "SELECT last_value FROM sequence_name" to
get the last_value of the sequence. I feel this need not be
documented, since the same behaviour exists on the publisher and
"SELECT last_value FROM sequence_name" is available to get the
last_value.
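
For example, a quick way to inspect the synchronized value on the
subscriber without consuming it via nextval() (just a sketch, assuming
a sequence named s1 in the public schema):

-- read the sequence state directly; no session state is needed
SELECT last_value, log_cnt, is_called FROM s1;

-- or via the catalog view
SELECT last_value FROM pg_sequences
WHERE schemaname = 'public' AND sequencename = 's1';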

Regards,
Vignesh

#105vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#100)
Re: Logical Replication of sequences

On Thu, 1 Aug 2024 at 04:25, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

I noticed that when replicating sequences (using the latest patches
0730_2*) the subscriber-side checks the *existence* of the sequence,
but apparently it is not checking other sequence attributes.

For example, consider:

Publisher: "CREATE SEQUENCE s1 START 1 INCREMENT 2;" should be a
sequence of only odd numbers.
Subscriber: "CREATE SEQUENCE s1 START 2 INCREMENT 2;" should be a
sequence of only even numbers.

Because the names match, currently the patch allows replication of the
s1 sequence. I think that might lead to unexpected results on the
subscriber. IMO it might be safer to report ERROR unless the sequences
match properly (i.e. not just a name check).

Below is a demonstration the problem:

==========
Publisher:
==========

(publisher sequence is odd numbers)

test_pub=# create sequence s1 start 1 increment 2;
CREATE SEQUENCE
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
3
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
5
(1 row)

test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
test_pub=#

==========
Subscriber:
==========

(subscriber sequence is even numbers)

test_sub=# create sequence s1 start 2 increment 2;
CREATE SEQUENCE
test_sub=# SELECT * FROM nextval('s1');
nextval
---------
2
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
4
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
6
(1 row)

test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'
PUBLICATION pub1;
2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created
by regression test cases should have names starting with "regress_"
WARNING: subscriptions created by regression test cases should have
names starting with "regress_"
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION
test_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical
replication apply worker for subscription "sub1" has started
2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
synchronization for subscription "sub1", sequence "s1" has finished
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has finished

(after the CREATE SUBSCRIPTION we are getting replicated odd values
from the publisher, even though the subscriber side sequence was
supposed to be even numbers)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
7
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
9
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
11
(1 row)

(Looking at the description you would expect odd values for this
sequence to be impossible)

test_sub=# \dS+ s1
                            Sequence "public.s1"
  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
--------+-------+---------+---------------------+-----------+---------+-------
 bigint |     2 |       1 | 9223372036854775807 |         2 | no      |     1

Even if we check the sequence definition during the CREATE
SUBSCRIPTION/ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES commands, there's still
a chance that the sequence definition might change after the command
has been executed. Currently, there's no mechanism to lock a sequence,
and we also permit replication of table data even if the table
structures differ, such as mismatched data types like int and
smallint. I have modified the patch to log a warning informing users
that the sequence options on the publisher and subscriber are not the
same, and advising them to ensure that the sequence definitions are
consistent on both sides.
The v20240805 version patch attached at [1] has these changes.
[1]: /messages/by-id/CALDaNm1Y_ot-jFRfmtwDuwmFrgSSYHjVuy28RspSopTtwzXy8w@mail.gmail.com
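
As a side note, one way for users to spot such a definition mismatch
is to compare the sequence parameters on both nodes, e.g. (only a
sketch, assuming a sequence named s1 in the public schema):

-- run on both the publisher and the subscriber and compare the output
SELECT data_type, start_value, increment_by, min_value, max_value,
       cache_size, cycle
FROM pg_sequences
WHERE schemaname = 'public' AND sequencename = 's1';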

Regards,
Vignesh

#106vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#97)
Re: Logical Replication of sequences

On Wed, 31 Jul 2024 at 14:39, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the lsn because the sequence LSN informs the user that
it has been synchronized up to the LSN in pg_subscription_seq. Since
we are not supporting incremental sync, the user will be able to
identify whether they should refresh sequences or not by checking the
lsn in pg_subscription_seq against the lsn of the sequence (using the
newly added pg_sequence_state) on the publisher.

How will the user know from the sequence's lsn that a refresh is
needed? The lsn indicates the page_lsn, so the sequence might advance
on the publisher without changing the lsn, and thus the lsn may look
the same on the subscriber even though a sequence-refresh is needed.
Am I missing something here?

When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also retrieved and stored in
pg_subscriber_rel as shown below:
--- Publisher page lsn
publisher=# select pg_sequence_state('seq1');
 pg_sequence_state
--------------------
 (0/1510E38,65,1,t)
(1 row)
--- Subscriber stores the publisher's page lsn for the sequence
subscriber=# select * from pg_subscription_rel where srrelid = 16384;
 srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
   16389 |   16384 | r          | 0/1510E38
(1 row)
If changes are made to the sequence, such as performing many nextvals,
the page LSN will be updated. Currently the sequence values are
prefetched in batches of SEQ_LOG_VALS (32), so the lsn will not get
updated for the prefetched values; once the prefetched values are
consumed, the lsn will get updated.
For example:
--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)
publisher=# select pg_sequence_state('seq1');
  pg_sequence_state
----------------------
 (0/1558CA8,143,22,t)
(1 row)

The user can then compare this updated value with the sequence's LSN
in pg_subscription_rel to determine when to re-synchronize the
sequence.
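
For instance, a rough way to do that comparison (only a sketch,
assuming the pg_sequence_state() function added by this patch set, a
sequence seq1 that is part of the subscription, and a subscription
named sub1):

-- on the publisher: current page LSN of the sequence
SELECT page_lsn FROM pg_sequence_state('seq1');

-- on the subscriber: LSN stored when the sequence was last synchronized
SELECT srsublsn FROM pg_subscription_rel
WHERE srrelid = 'seq1'::regclass;

-- if the publisher's page_lsn is ahead, the sequence can be re-synchronized:
-- ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;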

Regards,
Vignesh

#107vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#1)
3 attachment(s)
Re: Logical Replication of sequences

On Thu, 1 Aug 2024 at 09:26, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

Thanks for addressing the comments. Please find few comments on patch001 alone:

Potential Bug:
1) 'last_value' returned by pg_sequence_state() is wrong initially.

postgres=# create sequence myseq5;
CREATE SEQUENCE

postgres=# select * from pg_sequence_state('myseq5');
 page_lsn  | last_value | log_cnt | is_called 
-----------+------------+---------+-----------
 0/1579C78 |          1 |       0 | f

postgres=# SELECT nextval('myseq5') ;
nextval
---------
1

postgres=# select * from pg_sequence_state('myseq5');
 page_lsn  | last_value | log_cnt | is_called 
-----------+------------+---------+-----------
 0/1579FD8 |          1 |      32 | t

Both calls returned 1. First call should have returned NULL.

I noticed the same behavior when selecting from a sequence:
postgres=# select * from myseq5;
 last_value | log_cnt | is_called 
------------+---------+-----------
          1 |       0 | f
(1 row)

postgres=# select nextval('myseq5');
nextval
---------
1
(1 row)

postgres=# select * from myseq5;
 last_value | log_cnt | is_called 
------------+---------+-----------
          1 |      32 | t
(1 row)

By default it shows the last_value as the start value for the
sequence. So this looks ok to me.

2)
func.sgml:
a) pg_sequence_state : Don't we need to give input arg as regclass
like we give in nextval,setval etc?
b) Should 'lsn' be changed to 'page_lsn' as returned in output of
pg_sequence_state()

Modified

3)
read_seq_tuple() header says:
* lsn_ret will be set to the page LSN if the caller requested it.
* This allows the caller to determine which sequence changes are
* before/after the returned sequence state.

How, using lsn which is page-lsn and not sequence value/change lsn,
does the user interpret if sequence changes are before/after the
returned sequence state? Can you please elaborate or amend the
comment?

I have added this to the pg_sequence_state function header in the 003
patch, since the subscriber-side changes are present there and I felt
that is the more apt place to mention it. This is also added in the
sequencesync.c file header.

The attached v20240805_2 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20240805_2-0001-Introduce-pg_sequence_state-function-for.patch
From 4b3c66d7ac8d4157f6f97645b704e145c400ccd7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240805_2 1/3] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 0f7154b76a..ca5be43283 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d36f6001bb..7997b841cb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20240805_2-0002-Introduce-ALL-SEQUENCES-support-for-Post.patch
From e173cb8a5a69b7ac1b07e1e38b0f873da5005b75 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240805_2 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 697 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..783874fb75 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..9b3cad1cac 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 79190470f7..98efa973f0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4252,6 +4252,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4268,23 +4269,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4296,6 +4303,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4315,6 +4323,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4362,8 +4372,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6e6b7c2711..5e2b1bbd1a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20240805_2-0003-Enhance-sequence-synchronization-during-.patch (text/x-patch)
From 9a6060feabf73c9c002c56506e66caadb9b65f77 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 5 Aug 2024 12:18:31 +0530
Subject: [PATCH v20240805_2 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values from the publisher using pg_sequence_state.
     b) Sets the subscriber-side sequence values accordingly.
     c) Updates the sequence state to 'READY'.
     d) Commits after each batch of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - The sequence synchronization worker is started as described for
     subscription creation.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduces a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - The sequence synchronization worker then synchronizes all
     sequences, as described for subscription creation.
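
A rough usage sketch of the workflow described above (the publication,
subscription, and connection values below are only placeholders, not
part of this patch):

    -- publisher: publish all sequences
    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

    -- subscriber: sequences are added in 'init' state and synchronized
    -- by the sequence synchronization worker
    CREATE SUBSCRIPTION sub_all_seq
        CONNECTION 'host=publisher_host port=5432 dbname=postgres'
        PUBLICATION pub_all_seq;

    -- pick up tables/sequences newly added to the publication;
    -- only newly added sequences are synchronized
    ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION;

    -- re-synchronize all sequences, e.g. just before an upgrade cutover
    ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION SEQUENCES;
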
---
 doc/src/sgml/catalogs.sgml                    |  19 +-
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |   4 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  50 ++-
 doc/src/sgml/ref/create_subscription.sgml     |   8 +
 doc/src/sgml/system-views.sgml                |  67 +++
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  81 +++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 315 +++++++++++---
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  71 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 404 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 234 +++++++---
 src/backend/replication/logical/worker.c      |  74 +++-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  31 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 178 ++++++++
 32 files changed, 1485 insertions(+), 200 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..16c427ec3a 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running either
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+  <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a65839a670..3c550ae975 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..1d3318a4f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1999,8 +1999,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..eb7d544116 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,45 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -194,12 +211,35 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           parameter of <command>CREATE SUBSCRIPTION</command> for details about
           copying pre-existing data in binary format.
          </para>
+         <para>
+          See the <link linkend="sql-createsubscription-params-with-copy-data"><literal>copy_data</literal></link>
+          parameter of <command>CREATE SUBSCRIPTION</command> for details on
+          how to handle the warnings about sequence definitions differing
+          between the publisher and the subscriber.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      re-synchronizes the data for all subscribed sequences. If the sequence
+      definition differs between the publisher and the subscriber, this is
+      detected and a WARNING is reported.  The warning only indicates a
+      potential problem; it is recommended to alter the sequence so that its
+      options match those of the publisher and then execute the command again.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..de3bdb8ed1 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,14 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          If the sequence definition differs between the publisher and the
+          subscriber, this is detected and a WARNING is reported.  The warning
+          only indicates a potential problem; it is recommended to alter the
+          sequence so that its options match those of the publisher and then
+          execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
         </listitem>
        </varlistentry>
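
To illustrate how the refresh variants documented above are intended to be
used (the subscription name is just an example):

    -- pick up tables and sequences newly added to the publication;
    -- previously subscribed sequences are not re-synchronized
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- additionally re-synchronize the data of all subscribed sequences
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;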
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a0b692bf1e..8373ebc5b0 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2185,6 +2190,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
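
A quick way to exercise the new function on the publisher (the publication
name is illustrative, and the single output column is aliased explicitly
rather than relying on its catalog name):

    SELECT gps.relid::regclass
      FROM pg_get_publication_sequences('mypub') AS gps(relid);
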
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..8a2161e259 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
 #include "utils/lsyscache.h"
+#include "utils/memutils.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,7 +460,7 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
@@ -468,7 +471,8 @@ HasSubscriptionRelations(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,35 +511,41 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If all_states is true, return relations in all states (both tables and
+ * sequences).  Otherwise, return only the tables that have not yet reached
+ * READY state and only the sequences that are still in INIT state, i.e. the
+ * relations that still need to be synchronized.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
 	HeapTuple	tup;
-	int			nkeys = 0;
-	ScanKeyData skey[2];
+	ScanKeyData skey;
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
-	ScanKeyInit(&skey[nkeys++],
+	ScanKeyInit(&skey,
 				Anum_pg_subscription_rel_srsubid,
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
-		ScanKeyInit(&skey[nkeys++],
-					Anum_pg_subscription_rel_srsubstate,
-					BTEqualStrategyNumber, F_CHARNE,
-					CharGetDatum(SUBREL_STATE_READY));
-
 	scan = systable_beginscan(rel, InvalidOid, false,
-							  NULL, nkeys, skey);
+							  NULL, 1, &skey);
 
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
@@ -532,6 +556,27 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE)
+		{
+			/* Skip sequences if they were not requested */
+			if (!get_sequences)
+				continue;
+
+			/* Skip non-INIT sequences unless all_states was requested */
+			if (!all_states && (subrel->srsubstate != SUBREL_STATE_INIT))
+				continue;
+		}
+		else
+		{
+			/* Skip tables if they were not requested */
+			if (!get_tables)
+				continue;
+
+			/* Skip READY tables unless all_states was requested */
+			if (!all_states && (subrel->srsubstate == SUBREL_STATE_READY))
+				continue;
+		}
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
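
The new view can then be queried just like its pg_publication_tables
counterpart, for example (publication name is illustrative):

    SELECT * FROM pg_publication_sequences WHERE pubname = 'mypub';

which lists the schema and name of every sequence the publication contains.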
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..ec7d5bbba1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt of sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
-	/* Set the currval() state only if iscalled = true */
+	/* Set the currval() state only if is_called = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn allows the user to determine if the sequence has been updated
+ * since the last synchronization with the subscriber. This is done by
+ * comparing the current page_lsn with the value stored in pg_subscription_rel
+ * from the last synchronization.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..8c7796d052 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -734,9 +736,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +750,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			List	   *tables;
+			List	   *sequences;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -766,9 +769,24 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * info.
 			 */
 			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			foreach_ptr(RangeVar, rv, tables)
+			{
+				Oid			relid;
+
+				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+				/* Check for supported relkind. */
+				CheckSubscriptionRelkind(get_rel_relkind(relid),
+										 rv->schemaname, rv->relname);
+
+				AddSubscriptionRelState(subid, relid, table_state,
+										InvalidXLogRecPtr, true);
+			}
+
+			/* Add the sequences in init state */
+			sequences = fetch_sequence_list(wrconn, publications);
+			foreach_ptr(RangeVar, rv, sequences)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -847,12 +865,35 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If the 'copy_data' parameter is true, newly added relations are set to
+ * "init" state; otherwise, they are set to "ready" state.
+ *
+ * When 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added to or removed from the publications since the
+ * subscription was created or the publication was last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added to or removed from the publications
+ * since the subscription was created or the publication was last refreshed.
+ *
+ * If 'resync_all_sequences' is true, mark all subscribed sequences with "init"
+ * state for re-synchronization; otherwise, only the newly added tables and
+ * sequences are updated, based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +911,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	/* resync_all_sequences cannot be specified with refresh_tables */
+	Assert(!(resync_all_sequences && refresh_tables));
+
+	/* resync_all_sequences cannot be specified with copy_data as false */
+	Assert(!(resync_all_sequences && !copy_data));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +936,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +964,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +989,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1006,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1024,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(LOG,
+							(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											 get_namespace_name(get_rel_namespace(relid)),
+											 get_rel_name(relid),
+											 sub->name)));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1070,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1125,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			/* Sequences have no tablesync slots or origins, so skip them. */
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1514,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1570,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1589,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1645,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1887,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1914,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2272,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2450,105 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[8] = {TEXTOID, TEXTOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd, "SELECT DISTINCT n.nspname, c.relname, s.seqtypid, s.seqmin, s.seqmax, s.seqstart, s.seqincrement, s.seqcycle"
+						   " FROM pg_publication p, LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid),"
+						   " pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace JOIN pg_sequence s ON c.oid = s.seqrelid"
+						   " WHERE c.oid = gps.relid AND p.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 8, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		Oid			seqtypid;
+		int64		seqmin;
+		int64		seqmax;
+		int64		seqstart;
+		int64		seqincrement;
+		bool		seqcycle;
+		bool		isnull;
+		RangeVar   *rv;
+		Oid			relid;
+		HeapTuple	tup;
+		Form_pg_sequence seqform;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+		seqtypid = DatumGetObjectId(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+		seqmin = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+		Assert(!isnull);
+		seqmax = DatumGetInt64(slot_getattr(slot, 5, &isnull));
+		Assert(!isnull);
+		seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+		Assert(!isnull);
+		seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+		Assert(!isnull);
+		seqcycle = DatumGetBool(slot_getattr(slot, 8, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Get the local sequence */
+		tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+				 get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid));
+
+		seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+			seqform->seqincrement != seqincrement ||
+			seqform->seqcycle != seqcycle)
+			ereport(WARNING,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("parameters differ for the remote and local sequence \"%s.%s\"",
+						   get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid)),
+					errhint("Alter or re-create the local sequence to have the same parameters as the remote sequence."));
+
+		ReleaseSysCache(tup);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9b3cad1cac..28b772df32 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..36c9952994 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,8 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if ((isTableSyncWorker(w) || isSequenceSyncWorker(w)) &&
+			w->subid == subid)
 			res++;
 	}
 
@@ -1178,7 +1210,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1346,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1389,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..2a5c8c5939
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,404 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The sequencesync worker retrieves the sequences that need to be synchronized
+ * from the pg_subscription_rel catalog table. It synchronizes up to
+ * MAX_SEQUENCES_SYNC_PER_BATCH (100) sequences per transaction by fetching
+ * each sequence's value and page_lsn from the remote publisher and applying
+ * them to the local subscriber sequence. After synchronization, it sets the
+ * sequence state to READY and records the remote page_lsn in
+ * pg_subscription_rel; this LSN can later be compared with the
+ * pg_sequence_state() page_lsn value to determine whether the sequence has
+ * changed since the last synchronization.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * Batching MAX_SEQUENCES_SYNC_PER_BATCH (100) sequences per transaction avoids
+ * creating a large number of transactions while still periodically releasing
+ * the locks taken on the sequence relations as each batch commits.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * The sequence last_value will be returned directly, while
+ * log_cnt, is_called and page_lsn will be returned via the output
+ * parameters log_cnt, is_called and lsn, respectively.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		last_value = 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return last_value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_lsn);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. On the other hand, an excessively
+ * large batch would keep a transaction, and the locks it holds, open for an
+ * extended period. The value below is a compromise between the two.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker's
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence synchronization runs as the sequence
+		 * owner, unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors here; such errors are probably
+ * caused by system resource problems, are not repeatable, and will simply
+ * terminate the worker.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..96ff253ab0 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -130,19 +130,22 @@ typedef enum
 	SYNC_TABLE_STATE_VALID,
 } SyncingTablesState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingTablesState relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchTableStates(void);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -157,15 +160,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -205,7 +217,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -252,7 +264,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -278,9 +290,9 @@ wait_for_worker_state_change(char expected_state)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -387,7 +399,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -429,9 +441,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -464,6 +473,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -477,11 +494,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -518,7 +530,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -661,10 +674,108 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time and restart it only after waiting
+ * at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply()
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequence sync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized and
+ * start a new tablesync worker and/or sequencesync worker for any newly
+ * added relations.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -682,7 +793,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchTableStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1444,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1564,39 +1688,48 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not in READY state into table_states_not_ready, and
+ * sequences that are in INIT state into sequence_states_not_ready. The pg_subscription_rel
+ * catalog is shared by tables and sequences. Changes to either sequences or
+ * tables can affect the validity of relation states, so we update both
+ * table_states_not_ready and sequence_states_not_ready simultaneously
+ * to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
 	static bool has_subrels = false;
+	bool		started_tx = false;
 
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch tables that are in non-ready state, and sequences that are in
+		 * init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1604,7 +1737,11 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -1625,8 +1762,14 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
 	}
 
 	return has_subrels;
@@ -1709,7 +1852,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1860,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1731,17 +1874,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchTableStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6dc54c7283..2e84b24617 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1137,8 +1145,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1193,8 +1204,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1254,8 +1268,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1381,8 +1398,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2223,8 +2243,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3621,8 +3644,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4531,8 +4557,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4649,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4656,8 +4689,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7997b841cb..899f0299b8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12007,6 +12007,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..58abed907a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_relations);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6dff23fe6f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,22 +250,27 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5201280669..358c76e78e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1443,6 +1443,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..88f2705abe
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,178 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# if the sequence definition does not match between publisher and subscriber.
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+($result, my $stdout, my $stderr) = $node_subscriber->psql(
+	'postgres', "
+        ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
+like(
+	$stderr,
+	qr/WARNING: ( [A-Z0-9]+:)? Sequence option in remote and local is not same for "public.regress_s4"/,
+	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
+);
+
+
+done_testing();
-- 
2.34.1

#108vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#101)
Re: Logical Replication of sequences

On Fri, 2 Aug 2024 at 14:24, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create replication slot when
subscription created is for 'sequences only'? IIUC, currently logical
replication apply worker is the one launching sequence-sync worker
whenever needed. I think it should be the launcher doing this job and
thus apply worker may even not be needed for current functionality of
sequence sync? Going forward when we implement incremental sync of
sequences, then we may have apply worker started but now it is not
needed.

I believe the current method of having the apply worker initiate the
sequence sync worker is advantageous for several reasons:
a) Reduces Launcher Load: This approach prevents overloading the
launcher, which must handle various other subscription requests.
b) Facilitates Incremental Sync: It provides a more straightforward
path to extend support for incremental sequence synchronization.
c) Reuses Existing Code: It leverages the existing tablesync worker
code for starting the tablesync process, avoiding the need to
duplicate code in the launcher.
d) Simplified Code Maintenance: Centralizing sequence synchronization
logic within the apply worker can simplify code maintenance and
updates, as changes will only need to be made in one place rather than
across multiple components.
e) Better Monitoring and Debugging: With sequence synchronization
being handled by the apply worker, you can more effectively monitor
and debug synchronization processes since all related operations are
managed by a single component.

Also, I noticed that even when a publication has no tables, we create
replication slot and start apply worker.

Regards,
Vignesh

#109vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#102)
Re: Logical Replication of sequences

On Fri, 2 Aug 2024 at 14:33, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Aug 2, 2024 at 2:24 PM shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create replication slot when
subscription created is for 'sequences only'? IIUC, currently logical
replication apply worker is the one launching sequence-sync worker
whenever needed. I think it should be the launcher doing this job and
thus apply worker may even not be needed for current functionality of
sequence sync? Going forward when we implement incremental sync of
sequences, then we may have apply worker started but now it is not
needed.

Also, can we please mention the state change and 'who does what' atop
sequencesync.c file similar to what we have atop tablesync.c file
otherwise it is difficult to figure out the flow.

I have added this in the sequencesync.c file; the changes are available in
the v20240805_2 version patch at [1].
[1]: /messages/by-id/CALDaNm1kk1MHGk3BU_XTxay=dR6sMHnm4TT5cmVz2f_JXkWENQ@mail.gmail.com

Regards,
Vignesh

#110Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#108)
Re: Logical Replication of sequences

On Mon, Aug 5, 2024 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 2 Aug 2024 at 14:24, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create replication slot when
subscription created is for 'sequences only'? IIUC, currently logical
replication apply worker is the one launching sequence-sync worker
whenever needed. I think it should be the launcher doing this job and
thus apply worker may even not be needed for current functionality of
sequence sync?

But that would lead to maintaining all sequence-sync of each
subscription by launcher. Say there are 100 sequences per subscription
and some of them from each subscription are failing due to some
reasons then the launcher will be responsible for ensuring all the
sequences are synced. I think it would be better to handle
per-subscription work by the apply worker.

Going forward when we implement incremental sync of

sequences, then we may have apply worker started but now it is not
needed.

I believe the current method of having the apply worker initiate the
sequence sync worker is advantageous for several reasons:
a) Reduces Launcher Load: This approach prevents overloading the
launcher, which must handle various other subscription requests.
b) Facilitates Incremental Sync: It provides a more straightforward
path to extend support for incremental sequence synchronization.
c) Reuses Existing Code: It leverages the existing tablesync worker
code for starting the tablesync process, avoiding the need to
duplicate code in the launcher.
d) Simplified Code Maintenance: Centralizing sequence synchronization
logic within the apply worker can simplify code maintenance and
updates, as changes will only need to be made in one place rather than
across multiple components.
e) Better Monitoring and Debugging: With sequence synchronization
being handled by the apply worker, you can more effectively monitor
and debug synchronization processes since all related operations are
managed by a single component.

Also, I noticed that even when a publication has no tables, we create
replication slot and start apply worker.

As far as I understand slots and origins are primarily required for
incremental sync. Would it be used only for sequence-sync cases? If
not then we can avoid creating those. I agree that it would add some
complexity to the code with sequence-specific checks, so we can create
a top-up patch for this if required and evaluate its complexity versus
the benefit it produces.

--
With Regards,
Amit Kapila.

#111shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#106)
Re: Logical Replication of sequences

On Mon, Aug 5, 2024 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 31 Jul 2024 at 14:39, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the lsn because the sequence LSN informs the user that
it has been synchronized up to the LSN in pg_subscription_seq. Since
we are not supporting incremental sync, the user will be able to
identify if he should run refresh sequences or not by checking the lsn
of the pg_subscription_seq and the lsn of the sequence(using
pg_sequence_state added) in the publisher.

How the user will know from seq's lsn that he needs to run refresh.
lsn indicates page_lsn and thus the sequence might advance on pub
without changing lsn and thus lsn may look the same on subscriber even
though a sequence-refresh is needed. Am I missing something here?

When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also retrieved and stored in
pg_subscriber_rel as shown below:
--- Publisher page lsn
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
--------------------
(0/1510E38,65,1,t)
(1 row)
--- Subscriber stores the publisher's page lsn for the sequence
subscriber=# select * from pg_subscription_rel where srrelid = 16384;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16389 |   16384 | r          | 0/1510E38
(1 row)
If changes are made to the sequence, such as performing many nextvals,
the page LSN will be updated. Currently the sequence values are
prefetched in batches of SEQ_LOG_VALS (32), so the lsn will not get updated
for the prefetched values; once the prefetched values are consumed, the lsn
will get updated.
For example:
--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
----------------------
(0/1558CA8,143,22,t)
(1 row)

The user can then compare this updated value with the sequence's LSN
in pg_subscription_rel to determine when to re-synchronize the
sequence.
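
(For concreteness, a minimal sketch of that comparison -- the queries below
are illustrative only, relying just on the pg_sequence_state() function and
the pg_subscription_rel columns shown above; <sub-name> is a placeholder:)

--- On the publisher: the page LSN is the first field of the result
publisher=# select pg_sequence_state('seq1');

--- On the subscriber: the LSN recorded when the sequence was last synced
subscriber=# select srsublsn from pg_subscription_rel
             where srrelid = 'seq1'::regclass;

--- If the publisher's page LSN is newer than srsublsn, refresh:
subscriber=# ALTER SUBSCRIPTION <sub-name> REFRESH PUBLICATION SEQUENCES;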

Thanks for the details. But I was referring to the case where we are
in between pre-fetched values on the publisher (say at the 25th value),
while the subscriber is slightly behind (say at the 15th value), but the
page LSN will be the same on both. Since the subscriber is behind, a
sequence-refresh is needed on the subscriber, but by looking at the LSN
(which is the same), one cannot say that for sure. Let me know if I have
misunderstood it.
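
(A hypothetical sketch of that situation, not output from an actual run,
assuming the pg_sequence_state() behaviour described above: while values
that were already prefetched are being consumed, last_value advances but no
WAL is written, so the page LSN does not move and still matches the
srsublsn stored on the subscriber:)

--- Hypothetical: consume values that were already prefetched (log_cnt >= 10)
publisher=# select pg_sequence_state('seq1');
publisher=# select nextval('seq1') from generate_series(1, 10);
publisher=# select pg_sequence_state('seq1');
--- same page LSN as before; only last_value and log_cnt have changed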

thanks
Shveta

#112shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#110)
Re: Logical Replication of sequences

On Mon, Aug 5, 2024 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Aug 5, 2024 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 2 Aug 2024 at 14:24, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create replication slot when
subscription created is for 'sequences only'? IIUC, currently logical
replication apply worker is the one launching sequence-sync worker
whenever needed. I think it should be the launcher doing this job and
thus apply worker may even not be needed for current functionality of
sequence sync?

But that would lead to maintaining all sequence-sync of each
subscription by launcher. Say there are 100 sequences per subscription
and some of them from each subscription are failing due to some
reasons then the launcher will be responsible for ensuring all the
sequences are synced. I think it would be better to handle
per-subscription work by the apply worker.

I thought we can give that task to sequence-sync worker. Once sequence
sync worker is started by launcher, it keeps on syncing until all the
sequences are synced (even failed ones) and then exits only after all
are synced; instead of apply worker starting it multiple times for
failed sequences. Launcher to start sequence sync worker when signaled
by 'alter-sub refresh seq'.
But after going through details given by Vignesh in [1], I also see
the benefits of using apply worker for this task. Since apply worker
is already looping and doing that for table-sync, we can reuse the
same code for sequence sync and maintenance will be easy. So looks
okay if we go with existing apply worker design.

[1]: /messages/by-id/CALDaNm1KO8f3Fj+RHHXM=USGwOcW242M1jHee=X_chn2ToiCpw@mail.gmail.com


Going forward when we implement incremental sync of

sequences, then we may have apply worker started but now it is not
needed.

I believe the current method of having the apply worker initiate the
sequence sync worker is advantageous for several reasons:
a) Reduces Launcher Load: This approach prevents overloading the
launcher, which must handle various other subscription requests.
b) Facilitates Incremental Sync: It provides a more straightforward
path to extend support for incremental sequence synchronization.
c) Reuses Existing Code: It leverages the existing tablesync worker
code for starting the tablesync process, avoiding the need to
duplicate code in the launcher.
d) Simplified Code Maintenance: Centralizing sequence synchronization
logic within the apply worker can simplify code maintenance and
updates, as changes will only need to be made in one place rather than
across multiple components.
e) Better Monitoring and Debugging: With sequence synchronization
being handled by the apply worker, you can more effectively monitor
and debug synchronization processes since all related operations are
managed by a single component.

Also, I noticed that even when a publication has no tables, we create
replication slot and start apply worker.

As far as I understand slots and origins are primarily required for
incremental sync. Would it be used only for sequence-sync cases? If
not then we can avoid creating those. I agree that it would add some
complexity to the code with sequence-specific checks, so we can create
a top-up patch for this if required and evaluate its complexity versus
the benefit it produces.

--
With Regards,
Amit Kapila.

#113shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#112)
Re: Logical Replication of sequences

On Tue, Aug 6, 2024 at 8:49 AM shveta malik <shveta.malik@gmail.com> wrote:

Do we need some kind of coordination between table sync and sequence
sync for internally generated sequences? Let's say we have an identity
column with a 'GENERATED ALWAYS' sequence. When the sequence is synced
to the subscriber, the subscriber can also do an (extra) insert into the
table, incrementing the sequence; then, when the publisher performs an
insert, the apply worker will blindly copy that row to the subscriber's
table, creating duplicate entries in the identity column.

CREATE TABLE color ( color_id INT GENERATED ALWAYS AS
IDENTITY,color_name VARCHAR NOT NULL);

Pub: insert into color(color_name) values('red');

Sub: perform sequence refresh and check 'r' state is reached, then do insert:
insert into color(color_name) values('yellow');

Pub: insert into color(color_name) values('blue');

After the above, data on Pub: (1, 'red'), (2, 'blue')

After the above, data on Sub: (1, 'red'), (2, 'yellow'), (2, 'blue')

The identity column has duplicate values. Should the apply worker error
out while inserting such a row into the table? Or is this not in the
scope of this project?

thanks
Shveta

#114Amit Kapila
amit.kapila16@gmail.com
In reply to: shveta malik (#112)
Re: Logical Replication of sequences

On Tue, Aug 6, 2024 at 8:49 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Aug 5, 2024 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Aug 5, 2024 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 2 Aug 2024 at 14:24, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create replication slot when
subscription created is for 'sequences only'? IIUC, currently logical
replication apply worker is the one launching sequence-sync worker
whenever needed. I think it should be the launcher doing this job and
thus apply worker may even not be needed for current functionality of
sequence sync?

But that would lead to maintaining all sequence-sync of each
subscription by launcher. Say there are 100 sequences per subscription
and some of them from each subscription are failing due to some
reasons then the launcher will be responsible for ensuring all the
sequences are synced. I think it would be better to handle
per-subscription work by the apply worker.

I thought we can give that task to sequence-sync worker. Once sequence
sync worker is started by launcher, it keeps on syncing until all the
sequences are synced (even failed ones) and then exits only after all
are synced; instead of apply worker starting it multiple times for
failed sequences. Launcher to start sequence sync worker when signaled
by 'alter-sub refresh seq'.
But after going through details given by Vignesh in [1], I also see
the benefits of using apply worker for this task. Since apply worker
is already looping and doing that for table-sync, we can reuse the
same code for sequence sync and maintenance will be easy. So looks
okay if we go with existing apply worker design.

Fair enough. However, I was wondering whether apply_worker should exit
after syncing all sequences for a sequence-only subscription or should
it be there for future commands that can refresh the subscription and
add additional tables or sequences?

--
With Regards,
Amit Kapila.

#115shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#114)
Re: Logical Replication of sequences

On Tue, Aug 6, 2024 at 9:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 6, 2024 at 8:49 AM shveta malik <shveta.malik@gmail.com> wrote:

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create replication slot when
subscription created is for 'sequences only'? IIUC, currently logical
replication apply worker is the one launching sequence-sync worker
whenever needed. I think it should be the launcher doing this job and
thus apply worker may even not be needed for current functionality of
sequence sync?

But that would lead to maintaining all sequence-sync of each
subscription by launcher. Say there are 100 sequences per subscription
and some of them from each subscription are failing due to some
reasons then the launcher will be responsible for ensuring all the
sequences are synced. I think it would be better to handle
per-subscription work by the apply worker.

I thought we could give that task to the sequence-sync worker. Once the
sequence-sync worker is started by the launcher, it keeps syncing until
all the sequences (including previously failed ones) are synced and
exits only after all are synced, instead of the apply worker starting
it multiple times for failed sequences. The launcher would start the
sequence-sync worker when signaled by 'alter-sub refresh seq'.
But after going through the details given by Vignesh in [1], I also see
the benefits of using the apply worker for this task. Since the apply
worker is already looping and doing this for table-sync, we can reuse
the same code for sequence sync and maintenance will be easier. So it
looks okay if we go with the existing apply worker design.

Fair enough. However, I was wondering whether apply_worker should exit
after syncing all sequences for a sequence-only subscription

If the apply worker exits, then on the next sequence-refresh we need a
way to wake up the launcher to start the apply worker, which will then
start the sequence-sync worker. Instead, won't it be better if the
launcher starts the sequence-sync worker directly, without needing the
apply worker to be present (as I stated earlier)?

or should
it be there for future commands that can refresh the subscription and
add additional tables or sequences?

If we stick with the apply worker starting the sequence-sync worker when
needed, by continuously checking the seq-sync states ('i'/'r'), then
IMO it is better that the apply worker stays. But if we want the apply
worker to exit and start only when needed, then why not start the
sequence-sync worker directly for seq-only subscriptions?

thanks
Shveta

#116vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#115)
Re: Logical Replication of sequences

On Tue, 6 Aug 2024 at 10:24, shveta malik <shveta.malik@gmail.com> wrote:

On Tue, Aug 6, 2024 at 9:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 6, 2024 at 8:49 AM shveta malik <shveta.malik@gmail.com> wrote:

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create a replication slot when the
subscription created is for 'sequences only'? IIUC, currently the
logical replication apply worker is the one launching the sequence-sync
worker whenever needed. I think it should be the launcher doing this
job, and thus the apply worker may not even be needed for the current
functionality of sequence sync.

But that would lead to the launcher maintaining the sequence-sync state
of every subscription. Say there are 100 sequences per subscription and
some of them in each subscription keep failing for some reason; the
launcher would then be responsible for ensuring that all those
sequences get synced. I think it would be better to leave
per-subscription work to the apply worker.

I thought we could give that task to the sequence-sync worker. Once the
sequence-sync worker is started by the launcher, it keeps syncing until
all the sequences (including previously failed ones) are synced and
exits only after all are synced, instead of the apply worker starting
it multiple times for failed sequences. The launcher would start the
sequence-sync worker when signaled by 'alter-sub refresh seq'.
But after going through the details given by Vignesh in [1], I also see
the benefits of using the apply worker for this task. Since the apply
worker is already looping and doing this for table-sync, we can reuse
the same code for sequence sync and maintenance will be easier. So it
looks okay if we go with the existing apply worker design.

Fair enough. However, I was wondering whether apply_worker should exit
after syncing all sequences for a sequence-only subscription

If the apply worker exits, then on the next sequence-refresh we need a
way to wake up the launcher to start the apply worker, which will then
start the sequence-sync worker. Instead, won't it be better if the
launcher starts the sequence-sync worker directly, without needing the
apply worker to be present (as I stated earlier)?

I favour the current design because it ensures the system remains
extendable for future incremental sequence synchronization. If the
launcher were responsible for starting the sequence sync worker, it
would add extra load that could hinder its ability to service other
subscriptions and complicate the design for supporting incremental
sync of sequences. Additionally, this approach offers the other
benefits mentioned in [1]/messages/by-id/CALDaNm1KO8f3Fj+RHHXM=USGwOcW242M1jHee=X_chn2ToiCpw@mail.gmail.com.

or should
it be there for future commands that can refresh the subscription and
add additional tables or sequences?

If we stick with the apply worker starting the sequence-sync worker when
needed, by continuously checking the seq-sync states ('i'/'r'), then
IMO it is better that the apply worker stays. But if we want the apply
worker to exit and start only when needed, then why not start the
sequence-sync worker directly for seq-only subscriptions?

There is a risk that sequence synchronization might fail if the
sequence value from the publisher falls outside the defined minvalue
or maxvalue range. The apply worker must be active to determine
whether to initiate the sequence sync worker after the
wal_retrieve_retry_interval period. Typically, publications consisting
solely of sequences are uncommon. However, if a user wishes to use
such publications, they can disable the subscription if necessary and
re-enable it when a sequence refresh is needed.
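
To make the out-of-range risk concrete, here is a hypothetical sketch
(the sequence name is made up, not taken from the patch or its tests):

-- Publisher: the sequence has already advanced well past 100
CREATE SEQUENCE s_limit;
SELECT setval('s_limit', 50000);

-- Subscriber: the same sequence exists but with a tighter bound
CREATE SEQUENCE s_limit MAXVALUE 100;

Synchronizing s_limit would try to set the subscriber's sequence to
50000, which is outside its MAXVALUE, so the sync of that sequence
fails and is retried after wal_retrieve_retry_interval as described
above.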

[1]: /messages/by-id/CALDaNm1KO8f3Fj+RHHXM=USGwOcW242M1jHee=X_chn2ToiCpw@mail.gmail.com

Regards,
Vignesh

#117Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#107)
1 attachment(s)
Re: Logical Replication of sequences

Here are some review comments for the patch v20240805_2-0003.

======
doc/src/sgml/catalogs.sgml

nitpick - removed the word "either"

======
doc/src/sgml/ref/alter_subscription.sgml

I felt the discussions about "how to handle warnings" are a bit scattered:
e.g.1 - ALTER SUBSCRIPTION REFRESH PUBLICATION copy data referred to
CREATE SUBSCRIPTION copy data.
e.g.2 - ALTER SUBSCRIPTION REFRESH explains what to do, but now the
explanation is in 2 places.
e.g.3 - CREATE SUBSCRIPTION copy data explains what to do (again), but
IMO it belongs better in the common "Notes" part

FYI, I've moved all the information to one place (in the CREATE
SUBSCRIPTION "Notes") and others refer to this central place. See the
attached nitpicks diff.

REFRESH PUBLICATION copy_data
nitpick - now refers to CREATE SUBSCRIPTION "Notes". I also moved it
to be nearer to the other sequence stuff.

REFRESH PUBLICATION SEQUENCES:
nitpick - now refers to CREATE SUBSCRIPTION "Notes".

======
doc/src/sgml/ref/create_subscription.sgml

REFRESH PUBLICATION copy_data
nitpick - now refers to CREATE SUBSCRIPTION "Notes"

Notes:
nitpick - the explanation of, and what to do about sequence WARNINGS,
is moved to here

======
src/backend/commands/sequence.c

pg_sequence_state:
nitpick - I just moved the comment in pg_sequence_state() to below the
NOTE, which talks about "page LSN".

======
src/backend/catalog/pg_subscription.c

1. HasSubscriptionRelations

Should function 'HasSubscriptionRelations' be renamed to
'HasSubscriptionTables'?

~~~

GetSubscriptionRelations:
nitpick - tweak some "skip" comments.

======
src/backend/commands/subscriptioncmds.c

2. CreateSubscription

  tables = fetch_table_list(wrconn, publications);
- foreach(lc, tables)
+ foreach_ptr(RangeVar, rv, tables)
+ {
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr, true);
+ }
+
+ /* Add the sequences in init state */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach_ptr(RangeVar, rv, sequences)

These 2 loops (first for tables and then for sequences) seem to be
executing the same code. If you wanted, you could combine the lists
up-front and then have one code loop instead of 2. It would mean less
code. OTOH, maybe the current code is more readable? I am not sure
what is best, so I'm just bringing this to your attention.

~~~

AlterSubscription_refresh:
nitpick - typo /indicating tha/indicating that/

~~~

3. fetch_sequence_list

+ appendStringInfoString(&cmd, "SELECT DISTINCT n.nspname, c.relname,
s.seqtypid, s.seqmin, s.seqmax, s.seqstart, s.seqincrement,
s.seqcycle"
+    " FROM pg_publication p, LATERAL
pg_get_publication_sequences(p.pubname::text) gps(relid),"
+    " pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace JOIN
pg_sequence s ON c.oid = s.seqrelid"
+    " WHERE c.oid = gps.relid AND p.pubname IN (");
+ get_publications_str(publications, &cmd, true);
+ appendStringInfoChar(&cmd, ')');

Please wrap this better to make the SQL more readable.

~~

4.
+ if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+ seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+ seqform->seqincrement != seqincrement ||
+ seqform->seqcycle != seqcycle)
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("Sequence option in remote and local is not same for \"%s.%s\"",
+    get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid)),
+ errhint("Alter/Re-create the sequence using the same options as in remote."));

4a.
Are these really known as "options"? Or should they be called
"sequence parameters", or something else, like "sequence attributes"?

4b.
Is there a way to give more helpful information by identifying what
was different in the log? OTOH, maybe it would become too messy if
there were multiple differences...

======
src/backend/replication/logical/launcher.c

5. logicalrep_sync_worker_count

- if (isTablesyncWorker(w) && w->subid == subid)
+ if ((isTableSyncWorker(w) || isSequenceSyncWorker(w)) &&
+ w->subid == subid)

You could micro-optimize this -- it may be more efficient to write the
condition the other way around.

SUGGESTION
if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))

======
.../replication/logical/sequencesync.c

File header comment:
nitpick - there seems to be a large cut/paste mistake (the first 2
paragraphs are almost the same).
nitpick - reworded with the help of Chat-GPT for slightly better
clarity. Also fixed a couple of typos.
nitpick - it mentioned MAX_SEQUENCES_SYNC_PER_BATCH several times so I
changed the wording of one of them

~~~

fetch_remote_sequence_data:
nitpick - all other params have the same name as sequence members, so
change the parameter name /lsn/page_lsn/

~

copy_sequence:
nitpick - rename var /seq_lsn/seq_page_lsn/

======
src/backend/replication/logical/tablesync.c

6. process_syncing_sequences_for_apply

+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sync worker for the
+ * same relation after waiting at least wal_retrieve_retry_interval.

Why is it talking about "We start the sync worker for the same
relation ..."? The sequencesync_failuretime is per sync worker, not
per relation. And I don't see any 'same relation' check in the code.

======
src/include/catalog/pg_subscription_rel.h

GetSubscriptionRelations:
nitpick - changed parameter name /all_relations/all_states/

======
src/test/subscription/t/034_sequences.pl

nitpick - add some ########## comments to highlight the main test
parts to make it easier to read.
nitpick - fix typo /syned/synced/

7. More test cases?
IIUC you can also get a sequence mismatch warning during "ALTER ...
REFRESH PUBLICATION", and "CREATE SUBSCRIPTION". So, should those be
tested also?

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240806_SEQ_0003.txt (text/plain)
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 16c427e..5c66797 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8109,7 +8109,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
 
   <para>
    This catalog only contains tables and sequences known to the subscription
-   after running either
+   after running
    <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
   <link linkend="sql-altersubscription-params-refresh-publication">
    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index eb7d544..f280019 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -200,6 +200,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
          </para>
          <para>
+          See <xref linkend="sql-createsubscription-notes"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
+         <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
           <link linkend="sql-createsubscription-params-with-origin"><literal>origin</literal></link>
@@ -211,11 +217,6 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           parameter of <command>CREATE SUBSCRIPTION</command> for details about
           copying pre-existing data in binary format.
          </para>
-         <para>
-          See the <link linkend="sql-createsubscription-params-with-copy-data"><literal>copy_data</literal></link>
-          on how to handle the warnings regarding the difference in sequence
-          definition between the publisher and the subscriber.
-         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -230,12 +231,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
       <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
       only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
-      will re-synchronize the sequence data for all subscribed sequences. The
-      sequence definition can differ between the publisher and the subscriber,
-      this is detected and a WARNING is logged to the user, but the warning is
-      only an indication of a potential problem; it is recommended to alter the
-      sequence to keep the sequence option same as the publisher and execute
-      the command again.
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sql-createsubscription-notes"/> for recommendations on how
+      to handle any warnings about differences in the sequence definition
+      between the publisher and the subscriber.
      </para>
     </listitem>
    </varlistentry>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index de3bdb8..e28ed96 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -264,12 +264,10 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>origin</literal> parameter.
          </para>
          <para>
-          The sequence definition can differ between the publisher and the
-          subscriber, this is detected and a WARNING is logged to the user, but
-          the warning is only an indication of a potential problem; it is
-          recommended to alter the sequence to keep the sequence option same as
-          the publisher and execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
-          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+          See <xref linkend="sql-createsubscription-notes"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -543,6 +541,17 @@ WHERE N.nspname = PT.schemaname AND
       PT.pubname IN (&lt;pub-names&gt;);
 </programlisting></para>
 
+  <para>
+   Sequence definitions can differ between the publisher and the subscriber.
+   If this is detected, a WARNING is logged to inform the user of a
+   potential problem. It is recommended to use
+   <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+   to keep the subscriber sequence parameters the same as the publisher
+   sequence parameters. Then, execute
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+  </para>
+
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 8a2161e..876ab96 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -562,7 +562,7 @@ GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
 			if (!get_sequences)
 				continue;
 
-			/* Skip all non-init sequences if not all_states were requested */
+			/* Skip all non-init sequences unless all_states was requested */
 			if (!all_states && (subrel->srsubstate != SUBREL_STATE_INIT))
 				continue;
 		}
@@ -572,7 +572,7 @@ GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
 			if (!get_tables)
 				continue;
 
-			/* Skip all ready tables if not all_states were requested */
+			/* Skip all ready tables unless all_states was requested */
 			if (!all_states && (subrel->srsubstate == SUBREL_STATE_READY))
 				continue;
 		}
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ec7d5bb..322eb71 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1898,13 +1898,13 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ *
  * The page_lsn allows the user to determine if the sequence has been updated
  * since the last synchronization with the subscriber. This is done by
  * comparing the current page_lsn with the value stored in pg_subscription_rel
  * from the last synchronization.
- *
- * Note: This is roughly equivalent to selecting the data from the sequence,
- * except that it also returns the page LSN.
  */
 Datum
 pg_sequence_state(PG_FUNCTION_ARGS)
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 2a5c8c5..44fa6ac 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -18,30 +18,19 @@
  * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCE
  *
- * Sequencesync worker will get the sequences that should be synchronized from
- * pg_subscription_rel catalog table. It synchronizes
- * MAX_SEQUENCES_SYNC_PER_BATCH (100) sequences within a single transaction by
- * getting the sequence value from the remote publisher and updating it to the
- * local subscriber sequence and updates the seqeunce state to READY. It also
- * updates the remote sequence's lsn to pg_subscription_rel which can be
- * later used to compare it with the pg_sequence_state page_lsn value to
- * identify if sequence is changed since the last synchronization.
- *
- * The sequencesync worker retrieves the sequences that need to be synchronized
- * from the pg_subscription_rel catalog table. It synchronizes up to
- * MAX_SEQUENCES_SYNC_PER_BATCH (100) sequences in a single transaction by
- * fetching the sequence values and the sequence's page_lsn from the remote
- * publisher and updating them in the local subscriber sequence. After
- * synchronization, it sets the sequence state to READY. This LSN can later be
- * compared with the pg_sequence_state page LSN value to determine if the
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.  The page LSN can
+ * later be compared with the pg_sequence_state page_lsn to determine if the
  * sequence has changed since the last synchronization.
  *
  * So the state progression is always just: INIT -> READY.
  *
- * Here MAX_SEQUENCES_SYNC_PER_BATCH (100) sequences are synchronized within a
- * single transaction  to avoid creating a lot of transactions and also the
- * locks on the sequence relation will be periodically released during the
- * commit transaction.
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
  *
  *-------------------------------------------------------------------------
  */
@@ -70,12 +59,12 @@
  *
  * The sequence last_value will be returned directly, while
  * log_cnt, is_called and page_lsn will be returned via the output
- * parameters log_cnt, is_called and lsn, respectively.
+ * parameters log_cnt, is_called and page_lsn, respectively.
  */
 static int64
 fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
 						   char *relname, int64 *log_cnt, bool *is_called,
-						   XLogRecPtr *lsn)
+						   XLogRecPtr *page_lsn)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -114,7 +103,7 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
 	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
 	Assert(!isnull);
 
-	*lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
 	Assert(!isnull);
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -138,7 +127,7 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
 	int64		seq_last_value;
 	int64		seq_log_cnt;
 	bool		seq_is_called;
-	XLogRecPtr	seq_lsn = InvalidXLogRecPtr;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
 	WalRcvExecResult *res;
 	Oid			tableRow[] = {OIDOID, CHAROID};
 	TupleTableSlot *slot;
@@ -185,13 +174,13 @@ copy_sequence(WalReceiverConn *conn, Relation rel)
 
 	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
 												relname, &seq_log_cnt, &seq_is_called,
-												&seq_lsn);
+												&seq_page_lsn);
 
 	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
 				seq_log_cnt);
 
 	/* return the LSN when the sequence state was set */
-	return seq_lsn;
+	return seq_page_lsn;
 }
 
 /*
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 58abed9..1c954c9 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -92,6 +92,6 @@ extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 extern bool HasSubscriptionRelations(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
 									  bool get_sequences,
-									  bool all_relations);
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
index 88f2705..100a420 100644
--- a/src/test/subscription/t/034_sequences.pl
+++ b/src/test/subscription/t/034_sequences.pl
@@ -67,9 +67,11 @@ my $result = $node_subscriber->safe_psql(
 ));
 is($result, '100|32|t', 'initial test data replicated');
 
-# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
 # sequences of the publisher, but changes to existing sequences should
 # not be synced.
+##########
 
 # Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
 $node_publisher->safe_psql(
@@ -105,9 +107,11 @@ $result = $node_subscriber->safe_psql(
 is($result, '100|32|t',
 	'REFRESH PUBLICATION will sync newly published sequence');
 
-# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
 # new sequences of the publisher, and changes to existing sequences should
 # also be synced.
+##########
 
 # Create a new sequence 'regress_s3', and update the existing sequence
 # 'regress_s2'.
@@ -128,7 +132,7 @@ $result = $node_subscriber->safe_psql(
 $node_subscriber->poll_query_until('postgres', $synced_query)
   or die "Timed out while waiting for subscriber to synchronize data";
 
-# Check - existing sequences are syned
+# Check - existing sequences are synced
 $result = $node_subscriber->safe_psql(
 	'postgres', qq(
 	SELECT last_value, log_cnt, is_called FROM regress_s1;
@@ -150,8 +154,10 @@ $result = $node_subscriber->safe_psql(
 is($result, '100|32|t',
 	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
 
+##########
 # ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
 # for sequence definition not matching between the publisher and the subscriber.
+##########
 
 # Create a new sequence 'regress_s4' whose START value is not the same in the
 # publisher and subscriber.
@@ -165,6 +171,7 @@ $node_subscriber->safe_psql(
 	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
 ));
 
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
 ($result, my $stdout, my $stderr) = $node_subscriber->psql(
 	'postgres', "
         ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
#118vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#111)
Re: Logical Replication of sequences

On Mon, 5 Aug 2024 at 18:05, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Aug 5, 2024 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 31 Jul 2024 at 14:39, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the lsn because the sequence LSN tells the user that the
sequence has been synchronized up to that LSN in pg_subscription_seq.
Since we are not supporting incremental sync, the user will be able to
identify whether a sequence refresh is needed by comparing the lsn in
pg_subscription_seq with the lsn of the sequence on the publisher
(using the newly added pg_sequence_state).

How will the user know from the sequence's lsn that a refresh is
needed? The lsn indicates the page_lsn, and the sequence might advance
on the publisher without changing the lsn, so the lsn may look the same
on the subscriber even though a sequence-refresh is needed. Am I
missing something here?

When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also retrieved and stored in
pg_subscription_rel, as shown below:
--- Publisher page lsn
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
--------------------
(0/1510E38,65,1,t)
(1 row)
--- Subscriber stores the publisher's page lsn for the sequence
subscriber=# select * from pg_subscription_rel where srrelid = 16384;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16389 |   16384 | r          | 0/1510E38
(1 row)
If changes are made to the sequence, such as performing many nextvals,
the page LSN will be updated. Currently the sequence values are
prefetched in batches of SEQ_LOG_VALS (32), so the lsn will not be
updated for the prefetched values; once the prefetched values are
consumed, the lsn will be updated.
For example:
--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
----------------------
(0/1558CA8,143,22,t)
(1 row)

The user can then compare this updated value with the sequence's LSN
in pg_subscription_rel to determine when to re-synchronize the
sequence.

Thanks for the details. But I was referring to the case where we are
in between pre-fetched values on the publisher (say at the 25th value),
while the subscriber is slightly behind (say at the 15th value), but
the page-lsn is the same on both. Since the subscriber is behind, a
sequence-refresh is needed on the subscriber, but by looking at the lsn
(which is the same), one cannot say that for sure. Let me know if I
have misunderstood it.

Yes, at present, if the value is within the pre-fetched range, we
cannot distinguish it solely using the page_lsn. However, the
pg_sequence_state function also provides last_value and log_cnt, which
can be used to handle these specific cases.
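
For example, a rough sketch of the checks a user could do (the sequence
name follows the example above; the subscription name 'sub1' here is
just a placeholder):

-- On the publisher: current on-disk state of the sequence, including page LSN
SELECT page_lsn, last_value, log_cnt, is_called
FROM pg_sequence_state('seq1');

-- On the subscriber: the publisher LSN recorded at the last synchronization
SELECT srsublsn FROM pg_subscription_rel
WHERE srrelid = 'seq1'::regclass;

-- If the publisher's page_lsn is ahead of srsublsn (or last_value/log_cnt
-- show consumption within the prefetched range), re-synchronize:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;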

Regards,
Vignesh

#119Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#118)
Re: Logical Replication of sequences

On Tue, Aug 6, 2024 at 5:13 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 5 Aug 2024 at 18:05, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Aug 5, 2024 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 31 Jul 2024 at 14:39, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the lsn because the sequence LSN tells the user that the
sequence has been synchronized up to that LSN in pg_subscription_seq.
Since we are not supporting incremental sync, the user will be able to
identify whether a sequence refresh is needed by comparing the lsn in
pg_subscription_seq with the lsn of the sequence on the publisher
(using the newly added pg_sequence_state).

How will the user know from the sequence's lsn that a refresh is
needed? The lsn indicates the page_lsn, and the sequence might advance
on the publisher without changing the lsn, so the lsn may look the same
on the subscriber even though a sequence-refresh is needed. Am I
missing something here?

When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also retrieved and stored in
pg_subscription_rel, as shown below:
--- Publisher page lsn
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
--------------------
(0/1510E38,65,1,t)
(1 row)
--- Subscriber stores the publisher's page lsn for the sequence
subscriber=# select * from pg_subscription_rel where srrelid = 16384;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16389 |   16384 | r          | 0/1510E38
(1 row)
If changes are made to the sequence, such as performing many nextvals,
the page LSN will be updated. Currently the sequence values are
prefetched in batches of SEQ_LOG_VALS (32), so the lsn will not be
updated for the prefetched values; once the prefetched values are
consumed, the lsn will be updated.
For example:
--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
----------------------
(0/1558CA8,143,22,t)
(1 row)

The user can then compare this updated value with the sequence's LSN
in pg_subscription_rel to determine when to re-synchronize the
sequence.

Thanks for the details. But I was referring to the case where we are
in between pre-fetched values on the publisher (say at the 25th value),
while the subscriber is slightly behind (say at the 15th value), but
the page-lsn is the same on both. Since the subscriber is behind, a
sequence-refresh is needed on the subscriber, but by looking at the lsn
(which is the same), one cannot say that for sure. Let me know if I
have misunderstood it.

Yes, at present, if the value is within the pre-fetched range, we
cannot distinguish it solely using the page_lsn.

This makes sense to me.

However, the
pg_sequence_state function also provides last_value and log_cnt, which
can be used to handle these specific cases.

BTW, can we document all these steps for users to know when to refresh
the sequences, if not already documented?

--
With Regards,
Amit Kapila.

#120Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#79)
Re: Logical Replication of sequences

Hi Vignesh,

This is mostly a repeat of my previous mail from a while ago [1]/messages/by-id/CAHut+PuFH1OCj-P1UKoRQE2X4-0zMG+N1V7jdn=tOQV4RNbAbw@mail.gmail.com but
includes some corrections, answers, and more examples. I'm going to
try to persuade one last time because the current patch is becoming
stable, so I wanted to revisit this syntax proposal before it gets too
late to change anything.

If there is some problem with the proposed idea please let me know
because I can see only the advantages and no disadvantages of doing it
this way.

~~~

The current patchset offers two forms of subscription refresh:
1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option
[= value] [, ... ] ) ]
2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES

Since 'copy_data' is the only supported refresh_option, really it is more like:
1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( copy_data [=
true|false] ) ]
2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES

~~~

I proposed previously that instead of having 2 commands for refreshing
subscriptions we should have a single refresh command:

ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES|SEQUENCES] [ WITH
( copy_data [= true|false] ) ]

Why?

- IMO it is less confusing than having 2 commands that both refresh
sequences in slightly different ways

- It is more flexible because apart from refreshing everything, a user
can choose to refresh only tables or only sequences if desired; IMO
more flexibility is always good.

- There is no loss of functionality from the current implementation
AFAICT. You can still say "ALTER SUBSCRIPTION sub REFRESH PUBLICATION
SEQUENCES" exactly the same as the patchset allows.

- The implementation code will become simpler. For example, the
current implementation of AlterSubscription_refresh(...) includes the
(hacky?) 'resync_all_sequences' parameter and has an overcomplicated
relationship with other parameters as demonstrated by the assertions
below. IMO using the proposed syntax means this coding will become not
only simpler, but shorter too.
+ /* resync_all_sequences cannot be specified with refresh_tables */
+ Assert(!(resync_all_sequences && refresh_tables));
+
+ /* resync_all_sequences cannot be specified with copy_data as false */
+ Assert(!(resync_all_sequences && !copy_data));

~~~

So, to continue this proposal, let the meaning of 'copy_data' for
SEQUENCES be as follows:

- when copy_data == false: it means don't copy data (i.e. don't
synchronize anything). Add/remove sequences from pg_subscription_rel as
needed.

- when copy_data == true: it means copy data (i.e. synchronize) for
all sequences. Add/remove sequences from pg_subscription_rel as needed.

~~~

EXAMPLES using the proposed syntax:

Refreshing TABLES only...

ex1.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = false)
- same as PG17 functionality for "ALTER SUBSCRIPTION sub REFRESH
PUBLICATION WITH (copy_data = false)"

ex2.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES WITH (copy_data = true)
- same as PG17 functionality for "ALTER SUBSCRIPTION sub REFRESH
PUBLICATION WITH (copy_data = true)"

ex3. (using default copy_data)
ALTER SUBSCRIPTION sub REFRESH PUBLICATION TABLES
- same as ex2.

~

Refreshing SEQUENCES only...

ex4.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy_data = false)
- this adds/removes only sequences to pg_subscription_rel but doesn't
update the sequence values

ex5.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES WITH (copy_data = true)
- this adds/removes only sequences to pg_subscription_rel and also
updates (synchronizes) all sequence values.
- same functionality as "ALTER SUBSCRIPTION sub REFRESH PUBLICATION
SEQUENCES" in your current patchset

ex6. (using default copy_data)
ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES
- same as ex5.
- note, that this command has the same syntax and functionality as the
current patchset

~~~

When no object_type is specified it has intuitive meaning to refresh
both TABLES and SEQUENCES...

ex7.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = false)
- For tables, it is the same as the PG17 functionality
- For sequences it includes the same behaviour as ex4.

ex8.
ALTER SUBSCRIPTION sub REFRESH PUBLICATION WITH (copy_data = true)
- For tables, it is the same as the PG17 functionality
- For sequences it includes the same behaviour as ex5.
- There is one subtle difference from the current patchset because
this proposal will synchronize *all* sequences instead of only new
ones. But this is a good thing. The current documentation is
complicated by having to explain the differences between REFRESH
PUBLICATION and REFRESH PUBLICATION SEQUENCES. The current patchset
also raises questions like how the user chooses whether to use
"REFRESH PUBLICATION SEQUENCES" versus "REFRESH PUBLICATION WITH
(copy_data=true)". OTOH, the proposed syntax eliminates ambiguity.

ex9. (using default copy_data)
ALTER SUBSCRIPTION sub REFRESH PUBLICATION
- same as ex8

======
[1]: /messages/by-id/CAHut+PuFH1OCj-P1UKoRQE2X4-0zMG+N1V7jdn=tOQV4RNbAbw@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#121shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#105)
Re: Logical Replication of sequences

On Mon, Aug 5, 2024 at 10:26 AM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 1 Aug 2024 at 04:25, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

I noticed that when replicating sequences (using the latest patches
0730_2*) the subscriber-side checks the *existence* of the sequence,
but apparently it is not checking other sequence attributes.

For example, consider:

Publisher: "CREATE SEQUENCE s1 START 1 INCREMENT 2;" should be a
sequence of only odd numbers.
Subscriber: "CREATE SEQUENCE s1 START 2 INCREMENT 2;" should be a
sequence of only even numbers.

Because the names match, currently the patch allows replication of the
s1 sequence. I think that might lead to unexpected results on the
subscriber. IMO it might be safer to report ERROR unless the sequences
match properly (i.e. not just a name check).

Below is a demonstration of the problem:

==========
Publisher:
==========

(publisher sequence is odd numbers)

test_pub=# create sequence s1 start 1 increment 2;
CREATE SEQUENCE
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
3
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
5
(1 row)

test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
test_pub=#

==========
Subscriber:
==========

(subscriber sequence is even numbers)

test_sub=# create sequence s1 start 2 increment 2;
CREATE SEQUENCE
test_sub=# SELECT * FROM nextval('s1');
nextval
---------
2
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
4
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
6
(1 row)

test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'
PUBLICATION pub1;
2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created
by regression test cases should have names starting with "regress_"
WARNING: subscriptions created by regression test cases should have
names starting with "regress_"
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION
test_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical
replication apply worker for subscription "sub1" has started
2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
synchronization for subscription "sub1", sequence "s1" has finished
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has finished

(after the CREATE SUBSCRIPTION we are getting replicated odd values
from the publisher, even though the subscriber side sequence was
supposed to be even numbers)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
7
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
9
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
11
(1 row)

(Looking at the description you would expect odd values for this
sequence to be impossible)

I see that for such even-valued sequences, the user can still do
'setval' to an odd number, and then nextval will keep returning odd
values.

postgres=# SELECT nextval('s1');
6

postgres=SELECT setval('s1', 43);
43

postgres=# SELECT nextval('s1');
45

test_sub=# \dS+ s1
Sequence "public.s1"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 2 | 1 | 9223372036854775807 | 2 | no | 1

Even if we check the sequence definition during the CREATE
SUBSCRIPTION/ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES commands, there's still
a chance that the sequence definition might change after the command
has been executed. Currently, there's no mechanism to lock a sequence,
and we also permit replication of table data even if the table
structures differ, such as mismatched data types like int and
smallint. I have modified it to log a warning to inform users that the
sequence options on the publisher and subscriber are not the same and
advise them to ensure that the sequence definitions are consistent
between both.
The v20240805 version patch attached at [1] has the changes for the same.
[1] - /messages/by-id/CALDaNm1Y_ot-jFRfmtwDuwmFrgSSYHjVuy28RspSopTtwzXy8w@mail.gmail.com
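
For reference, a sketch of the remediation that warning points users to
(the names follow the s1/sub1 example above): on the subscriber, align
the sequence definition with the publisher's and then re-synchronize.

-- Subscriber: match the publisher's definition (START 1 INCREMENT 2)
ALTER SEQUENCE s1 START 1 INCREMENT 2;

-- Re-sync the sequence data from the publisher
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;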

The behavior during apply is no different from setval. Having said
that, I agree that the sequence definition can change even after the
subscription is created, but earlier we were not syncing sequences, so
the value of a particular sequence was going to remain within the
range/pattern defined by its attributes unless the user set it manually
using setval. Now, it is being changed in the background without the
user's knowledge.
The table case is different. In the case of table replication, if we
have a CHECK constraint or, say, a primary key, then a value which
violates those constraints will never be inserted into the table, even
during replication on the subscriber. For sequences, the parameters
(MIN, MAX, START, INCREMENT) can be considered similar to check
constraints; the only difference is that during apply we are still
overriding these and copying the publisher's value. Maybe detection of
such inconsistencies can be targeted later as a follow-up project. But
for the time being, it would be good to add a 'caveats' section in the
docs mentioning all such cases. The scope of this project should be
clearly documented.

thanks
Shveta

#122vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#117)
3 attachment(s)
Re: Logical Replication of sequences

On Tue, 6 Aug 2024 at 14:38, Peter Smith <smithpb2250@gmail.com> wrote:

Here are some review comments for the patch v20240805_2-0003.

4a.
Is there a way to give more helpful information by identifying what
was different in the log? OTOH, maybe it would become too messy if
there were multiple differences...

I had considered this while implementing, but chose not to print each
of the parameters because it would be too messy. I felt the existing
message is better.

7. More test cases?
IIUC you can also get a sequence mismatch warning during "ALTER ...
REFRESH PUBLICATION", and "CREATE SUBSCRIPTION". So, should those be
tested also?

Since it won't add any extra coverage, I feel there is no need to add this test.

The remaining comments have been addressed, and the changes are
included in the attached v20240807 version patch.

Regards,
Vignesh

Attachments:

v20240807-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 30ae921c907db078354068ad7aa77f8c4203195b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240807 1/3] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 0f7154b76a..ca5be43283 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d36f6001bb..7997b841cb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

Attachment: v20240807-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 5f4cd000a10b8baaca2e725f699195b2de211dde Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240807 2/3] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d and \dRp commands have been enhanced: \d now lists
the publications that a sequence is part of, and \dRp reports whether a
publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
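
For example, with this patch it becomes possible to create publications such
as the following (the publication names are only illustrative):

    CREATE PUBLICATION pub_all_sequences FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_all_objects FOR ALL TABLES, SEQUENCES;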
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 697 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..783874fb75 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES publications.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..9b3cad1cac 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that each object type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20240807-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From d30f502171adaa31dbcd2970ca0845b0897ef297 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 5 Aug 2024 12:18:31 +0530
Subject: [PATCH v20240807 3/3] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization by the sequence sync worker,
     as described in the subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by the
     sequence sync worker, as described in the subscription creation
     process.
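
As a rough illustration of the workflow described above (a sketch only:
pub_seq, sub_seq and the connection string are made-up names, while the
FOR ALL SEQUENCES and REFRESH PUBLICATION [SEQUENCES] syntax is the one
added by this patch series):

    -- Publisher: publish all sequences.
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- Subscriber: the published sequences are added to
    -- pg_subscription_rel in 'init' state and the sequence sync worker
    -- copies their values in batches.
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub_seq;

    -- Pick up sequences newly added on the publisher; only the newly
    -- added sequences are synchronized.
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION;

    -- Re-synchronize the data of all subscribed sequences, e.g. before
    -- a switchover or failover.
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;
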
---
 doc/src/sgml/catalogs.sgml                    |  19 +-
 doc/src/sgml/config.sgml                      |   4 +-
 doc/src/sgml/logical-replication.sgml         |  14 +-
 doc/src/sgml/monitoring.sgml                  |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml      |  61 ++-
 doc/src/sgml/ref/create_subscription.sgml     |  16 +
 doc/src/sgml/system-views.sgml                |  67 +++
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  83 +++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 324 +++++++++++---
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 406 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 238 +++++++---
 src/backend/replication/logical/worker.c      |  74 +++-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  31 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 185 ++++++++
 32 files changed, 1529 insertions(+), 210 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..5c66797d4d 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+  <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a1a1d58a43..379fb4cc13 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..99652c6690 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1683,10 +1683,12 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
      subscriber.  If the subscriber is used as a read-only database, then this
      should typically not be a problem.  If, however, some kind of switchover
      or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2001,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..d097680e5d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sql-createsubscription-notes"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,34 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      Sequence values may occasionally become out of sync due to updates in the
+      publisher. To verify this, compare the
+      <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+      on the subscriber with the page_lsn obtained from the
+      <function>pg_sequence_state</function> for the sequence on the publisher.
+      If the sequence is still using prefetched values, the page_lsn will not be
+      updated. In such cases, you will need to directly compare the sequences
+      and execute <literal>REFRESH PUBLICATION SEQUENCES</literal> if required.
+     </para>
+     <para>
+      See <xref linkend="sql-createsubscription-notes"/> for recommendations on how
+      to handle any warnings about differences in the sequence definition
+      between the publisher and the subscriber.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..e23edd7a2f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sql-createsubscription-notes"/> for recommendations
+          on how to handle any warnings about differences in the sequence
+          definition between the publisher and the subscriber, which might occur
+          when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -535,6 +541,16 @@ WHERE N.nspname = PT.schemaname AND
       PT.pubname IN (&lt;pub-names&gt;);
 </programlisting></para>
 
+  <para>
+   Sequence definitions can differ between the publisher and the subscriber.
+   If this is detected, a WARNING is logged to inform the user of a
+   potential problem. It is recommended to use
+   <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+   to keep the subscriber sequence parameters the same as the publisher
+   sequence parameters. Then, execute
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a0b692bf1e..8373ebc5b0 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2185,6 +2190,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   the sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..4e2f96058f 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,35 +511,41 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached 'READY' state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in 'init' state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
 	HeapTuple	tup;
-	int			nkeys = 0;
-	ScanKeyData skey[2];
+	ScanKeyData skey;
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
-	ScanKeyInit(&skey[nkeys++],
+	ScanKeyInit(&skey,
 				Anum_pg_subscription_rel_srsubid,
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
-		ScanKeyInit(&skey[nkeys++],
-					Anum_pg_subscription_rel_srsubstate,
-					BTEqualStrategyNumber, F_CHARNE,
-					CharGetDatum(SUBREL_STATE_READY));
-
 	scan = systable_beginscan(rel, InvalidOid, false,
-							  NULL, nkeys, skey);
+							  NULL, 1, &skey);
 
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
@@ -532,6 +556,27 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE)
+		{
+			/* Skip sequences if they were not requested */
+			if (!get_sequences)
+				continue;
+
+			/* Skip all non-init sequences unless all_states was requested */
+			if (!all_states && (subrel->srsubstate != SUBREL_STATE_INIT))
+				continue;
+		}
+		else
+		{
+			/* Skip tables if they were not requested */
+			if (!get_tables)
+				continue;
+
+			/* Skip all ready tables unless all_states was requested */
+			if (!all_states && (subrel->srsubstate == SUBREL_STATE_READY))
+				continue;
+		}
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
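
As a quick illustration of the new view, a minimal usage sketch (the publication name pub_seq is made up for this example):

    -- List the sequences contained in a given publication.
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'pub_seq';
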
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..322eb71965 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * The log_cnt parameter is currently used only by the sequencesync worker,
+ * to set log_cnt while synchronizing sequence values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1898,6 +1900,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
  *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
+ *
+ * The page_lsn allows the user to determine if the sequence has been updated
+ * since the last synchronization with the subscriber. This is done by
+ * comparing the current page_lsn with the value stored in pg_subscription_rel
+ * from the last synchronization.
  */
 Datum
 pg_sequence_state(PG_FUNCTION_ARGS)
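
To make the page_lsn comparison described above concrete, a rough subscriber-side sketch (the sequence name s1 is hypothetical, and this assumes pg_sequence_state() accepts the sequence OID/regclass, as it is invoked elsewhere in this patch):

    -- Current state of a local sequence, including its page LSN.
    SELECT last_value, log_cnt, is_called, page_lsn
      FROM pg_sequence_state('s1'::regclass);

    -- The sequence needs re-synchronization if its page_lsn has advanced past
    -- the LSN recorded in pg_subscription_rel at the last synchronization.
    SELECT ps.page_lsn > sr.srsublsn AS needs_resync
      FROM pg_subscription_rel sr,
           LATERAL pg_sequence_state(sr.srrelid) ps
     WHERE sr.srrelid = 's1'::regclass;
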
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..6b320b1111 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX todo: If the subscription is for a sequence-only publication,
+	 * creating this origin is unnecessary at this point. It can be created
+	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
+	 * publication is updated to include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool	   hastables = false;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -765,10 +774,15 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Get the table list from publisher and build local table status
 			 * info.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			if (relations != NIL)
+				hastables = true;
+
+			/* Include the sequence list from publisher. */
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +799,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX todo: If the subscription is for a sequence-only publication,
+			 * creating this slot is not necessary at the moment. It can be
+			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +827,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && hastables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +866,35 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If the 'copy_data' parameter is true, newly added relations are set to
+ * "init" state; otherwise, they are set to "ready" state.
+ *
+ * When 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or publication refresh.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * If 'resync_all_sequences' is true, mark all objects with "init" state
+ * for re-synchronization; otherwise, only update the newly added tables and
+ * sequences based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +912,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	/* resync_all_sequences cannot be specified with refresh_tables */
+	Assert(!(resync_all_sequences && refresh_tables));
+
+	/* resync_all_sequences cannot be specified with copy_data as false */
+	Assert(!(resync_all_sequences && !copy_data));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +937,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +965,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +990,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1007,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1025,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(LOG,
+							(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											 get_namespace_name(get_rel_namespace(relid)),
+											 get_rel_name(relid),
+											 sub->name)));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1071,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1515,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1530,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1571,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1590,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1646,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1888,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1915,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2273,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2451,109 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[8] = {TEXTOID, TEXTOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT n.nspname, c.relname, s.seqtypid,\n"
+						   "s.seqmin, s.seqmax, s.seqstart, s.seqincrement, s.seqcycle\n"
+						   "FROM pg_publication p,\n"
+						   "      LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid), pg_class c\n"
+						   "      JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						   "      JOIN pg_sequence s ON c.oid = s.seqrelid\n"
+						   "WHERE c.oid = gps.relid AND p.pubname IN (\n");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 8, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		Oid			seqtypid;
+		int64		seqmin;
+		int64		seqmax;
+		int64		seqstart;
+		int64		seqincrement;
+		bool		seqcycle;
+		bool		isnull;
+		RangeVar   *rv;
+		Oid			relid;
+		HeapTuple	tup;
+		Form_pg_sequence seqform;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+		seqtypid = DatumGetObjectId(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+		seqmin = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+		Assert(!isnull);
+		seqmax = DatumGetInt64(slot_getattr(slot, 5, &isnull));
+		Assert(!isnull);
+		seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+		Assert(!isnull);
+		seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+		Assert(!isnull);
+		seqcycle = DatumGetBool(slot_getattr(slot, 8, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Get the local sequence */
+		tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+				 get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid));
+
+		seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+			seqform->seqincrement != seqincrement ||
+			seqform->seqcycle != seqcycle)
+			ereport(WARNING,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("remote and local sequence parameters differ for \"%s.%s\"",
+						   get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid)),
+					errhint("Alter or re-create the local sequence so that its parameters match the remote sequence."));
+
+		ReleaseSysCache(tup);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
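
For reference, the intended user-facing flow with these changes (object names and the connection string below are placeholders):

    -- Published sequences are recorded in pg_subscription_rel in INIT state
    -- and then synchronized by the sequencesync worker.
    CREATE SUBSCRIPTION sub1
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub1;

    -- Pick up tables and sequences added to or removed from the publications
    -- since the last refresh; only newly added relations are synchronized.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- Reset all subscribed sequences to INIT state so that every sequence is
    -- re-synchronized from the publisher.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
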
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9b3cad1cac..28b772df32 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequencesync worker in the subscription's
+ * apply worker slot.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..2935c5309e
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,406 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * Apply worker will periodically check if there are any sequences in INIT
+ * state and start a sequencesync worker.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.  The page LSN can
+ * later be compared with the pg_sequence_state page_lsn to determine if the
+ * sequence has changed since the last synchronization.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequence sync worker. However, the approach of having the apply worker
+ * manage the sequence sync worker was chosen for the following reasons: a) It
+ * avoids overloading the launcher, which handles various other subscription
+ * requests. b) It offers a more straightforward path for extending support for
+ * incremental sequence synchronization. c) It utilizes the existing tablesync
+ * worker code to start the sequencesync process, thus preventing code
+ * duplication in the launcher. d) It simplifies code maintenance by
+ * consolidating changes to a single location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * The sequence last_value will be returned directly, while
+ * log_cnt, is_called and page_lsn will be returned via the output
+ * parameters log_cnt, is_called and page_lsn, respectively.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *page_lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		last_value = 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return last_value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_page_lsn);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the publisher's page LSN for the fetched sequence data. */
+	return seq_page_lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the synchronization runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Logical replication does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence sync with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
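
While testing the above, the per-sequence sync states and the sequencesync worker can be observed on the subscriber with something like this (assuming values[9] of pg_stat_get_subscription maps to the worker_type column of pg_stat_subscription):

    -- Sequence sync states: 'i' (INIT) = waiting to be synchronized,
    -- 'r' (READY) = synchronized.
    SELECT sr.srrelid::regclass AS seqname, sr.srsubstate, sr.srsublsn
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid
     WHERE c.relkind = 'S';

    -- The sequencesync worker, while it is running.
    SELECT subid, pid, worker_type
      FROM pg_stat_subscription
     WHERE worker_type = 'sequence synchronization';
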
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..5800f2160c 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -130,19 +130,22 @@ typedef enum
 	SYNC_TABLE_STATE_VALID,
 } SyncingTablesState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingTablesState relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchTableStates(void);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -157,15 +160,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -205,7 +217,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -252,7 +264,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -278,9 +290,9 @@ wait_for_worker_state_change(char expected_state)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -387,7 +399,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -429,9 +441,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -464,6 +473,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -477,11 +494,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -518,7 +530,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -661,10 +674,108 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply()
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequence worker already running?
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequence sync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized and
+ * start new tablesync worker and/or sequencesync worker for the newly added
+ * relations.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -682,7 +793,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchTableStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1444,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1564,39 +1688,48 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not in READY state into table_states_not_ready, and sequences
+ * that have INIT state into sequence_states_not_ready. The pg_subscription_rel
+ * catalog is shared by tables and sequences. Changes to either sequences or
+ * tables can affect the validity of relation states, so we update both
+ * table_states_not_ready and sequence_states_not_ready simultaneously
+ * to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
 	static bool has_subrels = false;
+	bool		started_tx = false;
 
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch tables that are in non-ready state, and sequences that are in
+		 * init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1604,19 +1737,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
+		 * If there were not-READY tables found then we know it does. But
 		 * if table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
 		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -1625,8 +1762,14 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
 	}
 
 	return has_subrels;
@@ -1709,7 +1852,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1860,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1731,17 +1874,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchTableStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6dc54c7283..2e84b24617 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1137,8 +1145,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1193,8 +1204,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1254,8 +1268,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1381,8 +1398,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2223,8 +2243,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3621,8 +3644,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4531,8 +4557,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4649,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4656,8 +4689,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7997b841cb..899f0299b8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12007,6 +12007,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6dff23fe6f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,22 +250,27 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5201280669..358c76e78e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1443,6 +1443,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..7cc8c8cfee
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,185 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+($result, my $stdout, my $stderr) = $node_subscriber->psql(
+	'postgres', "
+        ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
+like(
+	$stderr,
+	qr/WARNING: ( [A-Z0-9]+:)? Sequence parameter in remote and local is not same for "public.regress_s4"/,
+	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
+);
+
+
+done_testing();
-- 
2.34.1

#123vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#113)
Re: Logical Replication of sequences

On Tue, 6 Aug 2024 at 09:28, shveta malik <shveta.malik@gmail.com> wrote:

On Tue, Aug 6, 2024 at 8:49 AM shveta malik <shveta.malik@gmail.com> wrote:

Do we need some kind of coordination between table sync and sequence
sync for internally generated sequences? Lets say we have an identity
column with a 'GENERATED ALWAYS' sequence. When the sequence is synced
to subscriber, subscriber can also do an insert to table (extra one)
incrementing the sequence and then when publisher performs an insert,
apply worker will blindly copy that row to sub's table making identity
column's duplicate entries.

CREATE TABLE color ( color_id INT GENERATED ALWAYS AS
IDENTITY,color_name VARCHAR NOT NULL);

Pub: insert into color(color_name) values('red');

Sub: perform sequence refresh and check 'r' state is reached, then do insert:
insert into color(color_name) values('yellow');

Pub: insert into color(color_name) values('blue');

After above, data on Pub: (1, 'red') ;(2, 'blue'),

After above, data on Sub: (1, 'red') ;(2, 'yellow'); (2, 'blue'),

Identity column has duplicate values. Should the apply worker error
out while inserting such a row to the table? Or it is not in the
scope of this project?

This behavior is documented at [1]:
Sequence data is not replicated. The data in serial or identity
columns backed by sequences will of course be replicated as part of
the table, but the sequence itself would still show the start value on
the subscriber.

This behavior is because of the above logical replication restriction.
So the behavior looks ok to me and I feel this is not part of the
scope of this project. I have updated this documentation section to
mention that sequences can be updated using ALTER SUBSCRIPTION ...
REFRESH PUBLICATION SEQUENCES in the v20240807 version patch attached
at [2].

[1]: https://www.postgresql.org/docs/devel/logical-replication-restrictions.html
[2]: /messages/by-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ+6uJn-ZymV=0dbQ@mail.gmail.com

Regards,
Vignesh

#124vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#110)
Re: Logical Replication of sequences

On Mon, 5 Aug 2024 at 17:28, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Aug 5, 2024 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 2 Aug 2024 at 14:24, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 1, 2024 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 29, 2024 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for reporting this, these issues are fixed in the attached
v20240730_2 version patch.

I was reviewing the design of patch003, and I have a query. Do we need
to even start an apply worker and create replication slot when
subscription created is for 'sequences only'? IIUC, currently logical
replication apply worker is the one launching sequence-sync worker
whenever needed. I think it should be the launcher doing this job and
thus apply worker may even not be needed for current functionality of
sequence sync?

But that would lead to maintaining all sequence-sync of each
subscription by launcher. Say there are 100 sequences per subscription
and some of them from each subscription are failing due to some
reasons then the launcher will be responsible for ensuring all the
sequences are synced. I think it would be better to handle
per-subscription work by the apply worker.

Going forward when we implement incremental sync of sequences, then
we may have apply worker started but now it is not needed.

I believe the current method of having the apply worker initiate the
sequence sync worker is advantageous for several reasons:
a) Reduces Launcher Load: This approach prevents overloading the
launcher, which must handle various other subscription requests.
b) Facilitates Incremental Sync: It provides a more straightforward
path to extend support for incremental sequence synchronization.
c) Reuses Existing Code: It leverages the existing tablesync worker
code for starting the tablesync process, avoiding the need to
duplicate code in the launcher.
d) Simplified Code Maintenance: Centralizing sequence synchronization
logic within the apply worker can simplify code maintenance and
updates, as changes will only need to be made in one place rather than
across multiple components.
e) Better Monitoring and Debugging: With sequence synchronization
being handled by the apply worker, you can more effectively monitor
and debug synchronization processes since all related operations are
managed by a single component.

Also, I noticed that even when a publication has no tables, we create
replication slot and start apply worker.

As far as I understand slots and origins are primarily required for
incremental sync. Would it be used only for sequence-sync cases? If
not then we can avoid creating those. I agree that it would add some
complexity to the code with sequence-specific checks, so we can create
a top-up patch for this if required and evaluate its complexity versus
the benefit it produces.

I have added XXX todo comments in the v20240807 version patch
attached at [1]. I will handle this as a separate patch once the
current patch is stable.
[1]: /messages/by-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ+6uJn-ZymV=0dbQ@mail.gmail.com

Regards,
Vignesh

#125vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#119)
Re: Logical Replication of sequences

On Wed, 7 Aug 2024 at 08:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 6, 2024 at 5:13 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 5 Aug 2024 at 18:05, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Aug 5, 2024 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 31 Jul 2024 at 14:39, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the lsn because the sequence LSN informs the user that
it has been synchronized up to the LSN in pg_subscription_seq. Since
we are not supporting incremental sync, the user will be able to
identify if he should run refresh sequences or not by checking the lsn
of the pg_subscription_seq and the lsn of the sequence(using
pg_sequence_state added) in the publisher.

How the user will know from seq's lsn that he needs to run refresh.
lsn indicates page_lsn and thus the sequence might advance on pub
without changing lsn and thus lsn may look the same on subscriber even
though a sequence-refresh is needed. Am I missing something here?

When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also retrieved and stored in
pg_subscriber_rel as shown below:
--- Publisher page lsn
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
--------------------
(0/1510E38,65,1,t)
(1 row)
--- Subscriber stores the publisher's page lsn for the sequence
subscriber=# select * from pg_subscription_rel where srrelid = 16384;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16389 |   16384 | r          | 0/1510E38
(1 row)
If changes are made to the sequence, such as performing many nextvals,
the page LSN will be updated. Currently the sequence values are
prefetched for SEQ_LOG_VALS 32, so the lsn will not get updated for
the prefetched values, once the prefetched values are consumed the lsn
will get updated.
For example:
--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
----------------------
(0/1558CA8,143,22,t)
(1 row)

The user can then compare this updated value with the sequence's LSN
in pg_subscription_rel to determine when to re-synchronize the
sequence.

Thanks for the details. But I was referring to the case where we are
in between pre-fetched values on publisher (say at 25th value), while
on subscriber we are slightly behind (say at 15th value), but page-lsn
will be the same on both. Since the subscriber is behind, a
sequence-refresh is needed on sub, but by looking at lsn (which is
same), one can not say that for sure. Let me know if I have
misunderstood it.

Yes, at present, if the value is within the pre-fetched range, we
cannot distinguish it solely using the page_lsn.

This makes sense to me.

However, the
pg_sequence_state function also provides last_value and log_cnt, which
can be used to handle these specific cases.

BTW, can we document all these steps for users to know when to refresh
the sequences, if not already documented?

This has been documented in the v20240807 version attached at [1].
[1]: /messages/by-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ+6uJn-ZymV=0dbQ@mail.gmail.com

Regards,
Vignesh

#126Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#122)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh, Here are my v20240807-0003 review comments.

======
1. GENERAL DOCS.

IMO the replication of SEQUENCES is a big enough topic that it
deserves to have its own section in the docs chapter 31 [1].

Some of the create/alter subscription docs content would stay where it
is, but a new chapter would just tie everything together better. It
could also serve as a better place to describe the other sequence
replication content like:
(a) getting a WARNING for mismatched sequences and how to handle it.
(b) how can the user know when a subscription refresh is required to
(re-)synchronise sequences
(c) pub/sub examples

======
doc/src/sgml/logical-replication.sgml

2. Restrictions

Sequence data is not replicated. The data in serial or identity
columns backed by sequences will of course be replicated as part of
the table, but the sequence itself would still show the start value on
the subscriber. If the subscriber is used as a read-only database,
then this should typically not be a problem. If, however, some kind of
switchover or failover to the subscriber database is intended, then
the sequences would need to be updated to the latest values, either by
executing ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES or by
copying the current data from the publisher (perhaps using pg_dump) or
by determining a sufficiently high value from the tables themselves.

~

2a.
The paragraph starts by saying "Sequence data is not replicated.". It
seems wrong now. Doesn't that need rewording or removing?

~

2b.
Should the info "If, however, some kind of switchover or failover..."
be mentioned in the "Logical Replication Failover" section [2],
instead of here?

======
doc/src/sgml/ref/alter_subscription.sgml

3.
Sequence values may occasionally become out of sync due to updates in
the publisher. To verify this, compare the
pg_subscription_rel.srsublsn on the subscriber with the page_lsn
obtained from the pg_sequence_state for the sequence on the publisher.
If the sequence is still using prefetched values, the page_lsn will
not be updated. In such cases, you will need to directly compare the
sequences and execute REFRESH PUBLICATION SEQUENCES if required.

~

3a.
This whole paragraph may be better put in the new chapter that was
suggested earlier in review comment #1.

~

3b.
Is it only "Occasionally"? I expected subscriber-side sequences could
become stale quite often.

~

3c.
Is this advice very useful? It's saying if the LSN is different then
the sequence is out of date, but if the LSN is not different then you
cannot tell. Why not ignore LSN altogether and just advise the user to
directly compare the sequences in the first place?
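
For example, the advice could be as simple as: run the same query on
both nodes and compare the output (just a sketch; the sequence name is
made up):

SELECT last_value, log_cnt, is_called FROM public.my_sequence;

and, if the values differ, run ALTER SUBSCRIPTION ... REFRESH
PUBLICATION SEQUENCES on the subscriber.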

======

Also, there are more minor suggestions in the attached nitpicks diff.

======
[1]: https://www.postgresql.org/docs/current/logical-replication.html
[2]: file:///usr/local/pg_oss/share/doc/postgresql/html/logical-replication-failover.html

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240808_SEQ_0003.txt (text/plain)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 6b320b1..bb6aa8e 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -726,7 +726,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
 	/*
-	 * XXX todo: If the subscription is for a sequence-only publication,
+	 * XXX: If the subscription is for a sequence-only publication,
 	 * creating this origin is unnecessary at this point. It can be created
 	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
 	 * publication is updated to include tables or tables in schemas.
@@ -756,7 +756,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
-			bool	   hastables = false;
+			bool		has_tables;
 			List	   *relations;
 			char		table_state;
 
@@ -771,16 +771,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables and
+			 * sequences from the publisher.
 			 */
 			relations = fetch_table_list(wrconn, publications);
-			if (relations != NIL)
-				hastables = true;
-
-			/* Include the sequence list from publisher. */
+			has_tables = relations != NIL;
 			relations = list_concat(relations,
 									fetch_sequence_list(wrconn, publications));
+
 			foreach_ptr(RangeVar, rv, relations)
 			{
 				Oid			relid;
@@ -800,7 +798,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
 			 *
-			 * XXX todo: If the subscription is for a sequence-only publication,
+			 * XXX: If the subscription is for a sequence-only publication,
 			 * creating this slot is not necessary at the moment. It can be
 			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
 			 * publication is updated to include tables or tables in schema.
@@ -827,7 +825,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && hastables)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -2475,7 +2473,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
 						   "      LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid), pg_class c\n"
 						   "      JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						   "      JOIN pg_sequence s ON c.oid = s.seqrelid\n"
-						   "WHERE c.oid = gps.relid AND p.pubname IN (\n");
+						   "WHERE c.oid = gps.relid AND p.pubname IN (");
 	get_publications_str(publications, &cmd, true);
 	appendStringInfoChar(&cmd, ')');
 
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 2935c53..a79c8a3 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -18,8 +18,8 @@
  * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
  *
- * Apply worker will periodically check if there are any sequences in INIT
- * state and start a sequencesync worker.
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
  *
  * The sequencesync worker retrieves the sequences to be synchronized from the
  * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
@@ -35,16 +35,18 @@
  * (100) sequences are synchronized per transaction. The locks on the sequence
  * relation will be periodically released at each transaction commit.
  *
- * An alternative design was considered where the launcher process would
+ * XXX: An alternative design was considered where the launcher process would
  * periodically check for sequences that need syncing and then start the
- * sequence sync worker. However, the approach of having the apply worker
- * manage the sequence sync worker was chosen for the following reasons: a) It
- * avoids overloading the launcher, which handles various other subscription
- * requests. b) It offers a more straightforward path for extending support for
- * incremental sequence synchronization. c) It utilizes the existing tablesync
- * worker code to start the sequencesync process, thus preventing code
- * duplication in the launcher. d) It simplifies code maintenance by
- * consolidating changes to a single location rather than multiple components.
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
  *-------------------------------------------------------------------------
  */
 
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 5800f21..011c579 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1700,7 +1700,7 @@ copy_table_done:
 static bool
 FetchTableStates(void)
 {
-	static bool has_subrels = false;
+	static bool has_subtables = false;
 	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
@@ -1752,7 +1752,7 @@ FetchTableStates(void)
 		 * if table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
+		has_subtables = (table_states_not_ready != NIL) ||
 			HasSubscriptionTables(MySubscription->oid);
 
 		/*
@@ -1772,7 +1772,7 @@ FetchTableStates(void)
 		pgstat_report_stat(true);
 	}
 
-	return has_subrels;
+	return has_subtables;
 }
 
 /*
#127Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#120)
Re: Logical Replication of sequences

On Wed, Aug 7, 2024 at 10:12 AM Peter Smith <smithpb2250@gmail.com> wrote:

This is mostly a repeat of my previous mail from a while ago [1] but
includes some corrections, answers, and more examples. I'm going to
try to persuade one last time because the current patch is becoming
stable, so I wanted to revisit this syntax proposal before it gets too
late to change anything.

If there is some problem with the proposed idea please let me know
because I can see only the advantages and no disadvantages of doing it
this way.

~~~

The current patchset offers two forms of subscription refresh:
1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option
[= value] [, ... ] ) ]
2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES

Since 'copy_data' is the only supported refresh_option, really it is more like:
1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( copy_data [=
true|false] ) ]
2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES

~~~

I proposed previously that instead of having 2 commands for refreshing
subscriptions we should have a single refresh command:

ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES|SEQUENCES] [ WITH
( copy_data [= true|false] ) ]

Why?

- IMO it is less confusing than having 2 commands that both refresh
sequences in slightly different ways

- It is more flexible because apart from refreshing everything, a user
can choose to refresh only tables or only sequences if desired; IMO
more flexibility is always good.

- There is no loss of functionality from the current implementation
AFAICT. You can still say "ALTER SUBSCRIPTION sub REFRESH PUBLICATION
SEQUENCES" exactly the same as the patchset allows.

~~~

So, to continue this proposal, let the meaning of 'copy_data' for
SEQUENCES be as follows:

- when copy_data == false: it means don't copy data (i.e. don't
synchronize anything). Add/remove sequences from pg_subscriber_rel as
needed.

- when copy_data == true: it means to copy data (i.e. synchronize) for
all sequences. Add/remove sequences from pg_subscriber_rel as needed)

I find overloading the copy_data option more confusing than adding a
new variant for REFRESH. To make it clear, we can even think of
extending the command as ALTER SUBSCRIPTION name REFRESH PUBLICATION
ALL SEQUENCES or something like that. I don't know whether there is a
need or not but one can imagine extending it as ALTER SUBSCRIPTION
name REFRESH PUBLICATION SEQUENCES [<seq_name_1>, <seq_name_2>, ..].
This will allow selectively refreshing the sequences.
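
To make that concrete, the variants might look something like this
(purely hypothetical syntax, nothing like it exists in the current
patch set, and the exact punctuation of the sequence list is only a
guess):

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION ALL SEQUENCES;
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES seq1, seq2;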

--
With Regards,
Amit Kapila.

#128Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#127)
Re: Logical Replication of sequences

On Thu, Aug 8, 2024 at 1:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Aug 7, 2024 at 10:12 AM Peter Smith <smithpb2250@gmail.com> wrote:

This is mostly a repeat of my previous mail from a while ago [1] but
includes some corrections, answers, and more examples. I'm going to
try to persuade one last time because the current patch is becoming
stable, so I wanted to revisit this syntax proposal before it gets too
late to change anything.

If there is some problem with the proposed idea please let me know
because I can see only the advantages and no disadvantages of doing it
this way.

~~~

The current patchset offers two forms of subscription refresh:
1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option
[= value] [, ... ] ) ]
2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES

Since 'copy_data' is the only supported refresh_option, really it is more like:
1. ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( copy_data [=
true|false] ) ]
2. ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES

~~~

I proposed previously that instead of having 2 commands for refreshing
subscriptions we should have a single refresh command:

ALTER SUBSCRIPTION name REFRESH PUBLICATION [TABLES|SEQUENCES] [ WITH
( copy_data [= true|false] ) ]

Why?

- IMO it is less confusing than having 2 commands that both refresh
sequences in slightly different ways

- It is more flexible because apart from refreshing everything, a user
can choose to refresh only tables or only sequences if desired; IMO
more flexibility is always good.

- There is no loss of functionality from the current implementation
AFAICT. You can still say "ALTER SUBSCRIPTION sub REFRESH PUBLICATION
SEQUENCES" exactly the same as the patchset allows.

~~~

So, to continue this proposal, let the meaning of 'copy_data' for
SEQUENCES be as follows:

- when copy_data == false: it means don't copy data (i.e. don't
synchronize anything). Add/remove sequences from pg_subscriber_rel as
needed.

- when copy_data == true: it means to copy data (i.e. synchronize) for
all sequences. Add/remove sequences from pg_subscriber_rel as needed)

I find overloading the copy_data option more confusing than adding a
new variant for REFRESH. To make it clear, we can even think of
extending the command as ALTER SUBSCRIPTION name REFRESH PUBLICATION
ALL SEQUENCES or something like that. I don't know whether there is a
need or not but one can imagine extending it as ALTER SUBSCRIPTION
name REFRESH PUBLICATION SEQUENCES [<seq_name_1>, <seq_name_2>, ..].
This will allow selectively refreshing the sequences.

But, I haven't invented a new overloading of the "copy_data" option
(meaning "synchronize") for sequences. The current patchset already
interprets copy_data exactly this way.

For example, below are patch 0003 results:

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=false)
- this will add/remove new sequences in pg_subscription_rel, but it
will *not* synchronize the new sequence

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=true)
- this will add/remove new sequences in pg_subscription_rel, and it
*will* synchronize the new sequence

~

I only proposed that copy_data should apply to *all* sequences, not
just new ones.

======
Kind Regards.
Peter Smith.
Fujitsu Australia.

#129shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#122)
Re: Logical Replication of sequences

On Wed, Aug 7, 2024 at 1:45 PM vignesh C <vignesh21@gmail.com> wrote:

The remaining comments have been addressed, and the changes are
included in the attached v20240807 version patch.

Thanks for addressing the comment. Please find few comments for v20240807 :

patch002:
1)
create_publication.sgml:

--I think it will be good to add another example for both tables and sequences:
CREATE PUBLICATION all_sequences FOR ALL TABLES, SEQUENCES;
I was trying FOR ALL TABLES, FOR ALL SEQUENCES; but I think that is not
the correct way, so it would be good to have the correct way shown in
one example.

patch003:
2)

* The page_lsn allows the user to determine if the sequence has been updated
* since the last synchronization with the subscriber. This is done by
* comparing the current page_lsn with the value stored in pg_subscription_rel
* from the last synchronization.
*/
Datum
pg_sequence_state(PG_FUNCTION_ARGS)

--This information is still incomplete. Maybe we should also mention
the name of the other attribute that helps to determine this.
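
For example, my understanding is that the check would look roughly
like this (a sketch only; pg_sequence_state() is from patch 0001,
srsublsn is the LSN stored in pg_subscription_rel at the last
synchronization, and public.myseq is just an example sequence):

-- on the publisher: the current page LSN of the sequence
SELECT page_lsn FROM pg_sequence_state('public.myseq');

-- on the subscriber: the LSN recorded at the last synchronization
SELECT srsublsn FROM pg_subscription_rel
WHERE srrelid = 'public.myseq'::regclass;

-- if page_lsn is greater than srsublsn, the sequence has advanced
-- since the last synchronization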

3)
Shall process_syncing_sequences_for_apply() be moved to sequencesync.c?

4)
Would it be better to give a single warning for all mismatched
sequences (with a comma-separated list of the sequence names)?

postgres=# create subscription sub1 connection '....' publication pub1;
WARNING: Sequence parameter in remote and local is not same for "public.myseq2"
HINT: Alter/Re-create the sequence using the same parameter as in remote.
WARNING: Sequence parameter in remote and local is not same for "public.myseq0"
HINT: Alter/Re-create the sequence using the same parameter as in remote.
WARNING: Sequence parameter in remote and local is not same for "public.myseq4"
HINT: Alter/Re-create the sequence using the same parameter as in remote.

5)
IIUC, sequencesync_failure_time is changed by multiple processes. The
seq-sync worker sets it before exiting on failure, while the apply
worker resets it. The apply worker also reads it in a few places. Shall
it be accessed under LogicalRepWorkerLock?

6)
process_syncing_sequences_for_apply():

--I feel MyLogicalRepWorker->sequencesync_failure_time should be reset
to 0 only after we are sure that logicalrep_worker_launch() has
launched the worker without any error, but I am not sure what the
clean way to do that would be. If we move the reset to after the
logicalrep_worker_launch() call, there is a chance that the seq-sync
worker has already started, failed, and set this failure time, which
the apply worker would then mistakenly reset. Moving it inside
logicalrep_worker_launch() does not seem like a good approach either.

7)
sequencesync.c
PostgreSQL logical replication: initial sequence synchronization

--Since this is also invoked by REFRESH, shall we remove 'initial'?

8)
/*
* Process any tables that are being synchronized in parallel and
* any newly added relations.
*/
process_syncing_relations(last_received);

--I did not understand the comment very well. Why are we using two
different words, 'tables' and 'relations'? I feel the comment should
mention sequences too.

9)
logical-replication.sgml: Sequence data is not replicated.

--I feel we should rephrase this line now to indicate that sequence
data can be replicated using the new options.

thanks
Shveta

#130Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#128)
Re: Logical Replication of sequences

On Thu, Aug 8, 2024 at 11:09 AM Peter Smith <smithpb2250@gmail.com> wrote:

But, I haven't invented a new overloading for "copy_data" option
(meaning "synchronize") for sequences. The current patchset already
interprets copy_data exactly this way.

For example, below are patch 0003 results:

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=false)
- this will add/remove new sequences in pg_subscription_rel, but it
will *not* synchronize the new sequences

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data=true)
- this will add/remove new sequences in pg_subscription_rel, and it
*will* synchronize the new sequences

~

I only proposed that copy_data should apply to *all* sequences, not
just new ones.

I don't like this difference because for tables it would *not*
consider syncing the already-existing tables, whereas for sequences it
would consider syncing the existing ones. We previously discussed
adding a new option like copy_all_sequences instead of adding a new
variant of the command, but that has its own set of problems, so we
agreed to proceed with a new variant. See [1] (...Good point. And I
understood that the REFRESH PUBLICATION SEQUENCES command would be
helpful when users want to synchronize sequences between two nodes
before upgrading.).
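
In other words, the envisioned pre-upgrade workflow would be roughly
the following (a sketch only; the subscription name is an example):

-- run on the subscriber just before shutting down the publisher for
-- the upgrade, so that the sequence values on both nodes match:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;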

Having said that, if others also prefer to use copy_data for this
purpose with a different meaning of this option w.r.t tables and
sequences then we can still consider it.

[1]: /messages/by-id/CAD21AoAAszSeHNRha4HND8b9XyzNrx6jbA7t3Mbe+fH4hNRj9A@mail.gmail.com

--
With Regards,
Amit Kapila.

#131vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#126)
4 attachment(s)
Re: Logical Replication of sequences

On Thu, 8 Aug 2024 at 08:30, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my v20240807-0003 review comments.

2a.
The paragraph starts by saying "Sequence data is not replicated.". It
seems wrong now. Doesn't that need rewording or removing?

Changed it to refer to incremental sequence changes.

~

2b.
Should the info "If, however, some kind of switchover or failover..."
be mentioned in the "Logical Replication Failover" section [2],
instead of here?

I think mentioning this here is appropriate. The other section focuses
more on how logical replication can proceed with a new primary. Once
the logical replication setup is complete, sequences can be refreshed
at any time.

The rest of the comments are fixed; the attached v20240808 version
patch includes those changes.

Regards,
Vignesh

Attachments:

v20240808-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 3d6756807eb744d28d32562af207d9189ff95c93 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240808 1/4] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 0f7154b76a..ca5be43283 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d36f6001bb..7997b841cb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20240808-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 2a1964800b6b5fdae4d89b600e0574ed94b03a66 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:13:02 +0530
Subject: [PATCH v20240808 3/4] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  83 +++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 326 +++++++++++---
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 408 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 244 ++++++++---
 src/backend/replication/logical/worker.c      |  74 +++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  31 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 185 ++++++++
 26 files changed, 1372 insertions(+), 193 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..4e2f96058f 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,35 +511,41 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached 'READY' state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in 'init' state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
 	HeapTuple	tup;
 	int			nkeys = 0;
-	ScanKeyData skey[2];
+	ScanKeyData skey;
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
-	ScanKeyInit(&skey[nkeys++],
+	ScanKeyInit(&skey,
 				Anum_pg_subscription_rel_srsubid,
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
-		ScanKeyInit(&skey[nkeys++],
-					Anum_pg_subscription_rel_srsubstate,
-					BTEqualStrategyNumber, F_CHARNE,
-					CharGetDatum(SUBREL_STATE_READY));
-
 	scan = systable_beginscan(rel, InvalidOid, false,
-							  NULL, nkeys, skey);
+							  NULL, nkeys, &skey);
 
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
@@ -532,6 +556,27 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE)
+		{
+			/* Skip sequences if they were not requested */
+			if (!get_sequences)
+				continue;
+
+			/* Skip all non-init sequences unless all_states was requested */
+			if (!all_states && (subrel->srsubstate != SUBREL_STATE_INIT))
+				continue;
+		}
+		else
+		{
+			/* Skip tables if they were not requested */
+			if (!get_tables)
+				continue;
+
+			/* Skip all ready tables unless all_states was requested */
+			if (!all_states && (subrel->srsubstate == SUBREL_STATE_READY))
+				continue;
+		}
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..322eb71965 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1898,6 +1900,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
  *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
+ *
+ * The page_lsn allows the user to determine if the sequence has been updated
+ * since the last synchronization with the subscriber. This is done by
+ * comparing the current page_lsn with the value stored in pg_subscription_rel
+ * from the last synchronization.
  */
 Datum
 pg_sequence_state(PG_FUNCTION_ARGS)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..bb6aa8eae2 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication,
+	 * creating this origin is unnecessary at this point. It can be created
+	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
+	 * publication is updated to include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables and
+			 * sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +797,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is not necessary at the moment. It can be
+			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +825,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +864,35 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * If 'copy_data' parameter is true, the function will set the state
+ * to "init"; otherwise, it will set the state to "ready".
+ *
+ * When 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * If 'resync_all_sequences' is true, mark all objects with "init" state
+ * for re-synchronization; otherwise, only update the newly added tables and
+ * sequences based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +910,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	/* resync_all_sequences cannot be specified with refresh_tables */
+	Assert(!(resync_all_sequences && refresh_tables));
+
+	/* resync_all_sequences cannot be specified with copy_data as false */
+	Assert(!(resync_all_sequences && !copy_data));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +935,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +963,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +988,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1005,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1023,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(LOG,
+							(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											 get_namespace_name(get_rel_namespace(relid)),
+											 get_rel_name(relid),
+											 sub->name)));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1069,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1124,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1528,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1569,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1588,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1644,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1886,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1913,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2271,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2449,109 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[8] = {TEXTOID, TEXTOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT n.nspname, c.relname, s.seqtypid,\n"
+						   "s.seqmin, s.seqmax, s.seqstart, s.seqincrement, s.seqcycle\n"
+						   "FROM pg_publication p,\n"
+						   "      LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid), pg_class c\n"
+						   "      JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						   "      JOIN pg_sequence s ON c.oid = s.seqrelid\n"
+						   "WHERE c.oid = gps.relid AND p.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 8, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		Oid			seqtypid;
+		int64		seqmin;
+		int64		seqmax;
+		int64		seqstart;
+		int64		seqincrement;
+		bool		seqcycle;
+		bool		isnull;
+		RangeVar   *rv;
+		Oid			relid;
+		HeapTuple	tup;
+		Form_pg_sequence seqform;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+		seqtypid = DatumGetObjectId(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+		seqmin = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+		Assert(!isnull);
+		seqmax = DatumGetInt64(slot_getattr(slot, 5, &isnull));
+		Assert(!isnull);
+		seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+		Assert(!isnull);
+		seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+		Assert(!isnull);
+		seqcycle = DatumGetBool(slot_getattr(slot, 8, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Get the local sequence */
+		tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+				 get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid));
+
+		seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+			seqform->seqincrement != seqincrement ||
+			seqform->seqcycle != seqcycle)
+			ereport(WARNING,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("Sequence parameter in remote and local is not same for \"%s.%s\"",
+						   get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid)),
+					errhint("Alter/Re-create the sequence using the same parameter as in remote."));
+
+		ReleaseSysCache(tup);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9b3cad1cac..28b772df32 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10835,11 +10835,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequencesync worker in the LogicalRepWorker
+ * entry of the subscription's apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..a79c8a3c3d
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,408 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: initial sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.  The page LSN can
+ * later be compared with the pg_sequence_state page_lsn to determine if the
+ * sequence has changed since the last synchronization.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * The sequence's last_value is returned directly as the function result,
+ * while log_cnt, is_called and page_lsn are returned via the corresponding
+ * output parameters.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *page_lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		last_value = (Datum) 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return last_value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_page_lsn);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence in its own transaction would incur the overhead
+ * of repeatedly starting and committing transactions. On the other hand, an
+ * excessively large batch would keep a transaction (and the sequence locks)
+ * open for too long, hence the cap below.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Use the slot name instead of the subscription name as the
+	 * application_name, so that it differs from the leader apply worker's
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable; the failure time recorded at worker
+ * exit lets the apply worker retry the synchronization later.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..011c579f32 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -130,19 +130,22 @@ typedef enum
 	SYNC_TABLE_STATE_VALID,
 } SyncingTablesState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingTablesState relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchTableStates(void);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -157,15 +160,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -205,7 +217,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -252,7 +264,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -278,9 +290,9 @@ wait_for_worker_state_change(char expected_state)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -387,7 +399,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -429,9 +441,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -464,6 +473,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -477,11 +494,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -518,7 +530,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -661,10 +674,108 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Handle sequence synchronization on behalf of the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply()
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequence sync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized and
+ * start new tablesync worker and/or sequencesync worker for the newly added
+ * relations.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -682,7 +793,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchTableStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1444,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1564,39 +1688,48 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not in READY state into table_states_not_ready, and
+ * sequences that are in INIT state into sequence_states_not_ready. The
+ * pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of the relation states,
+ * so we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch tables that are in non-ready state, and sequences that are in
+		 * init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1604,19 +1737,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
+		 * If there were not-READY tables found then we know it does. But
 		 * if table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -1625,11 +1762,17 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_TABLE_STATE_VALID;
 	}
 
-	return has_subrels;
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	return has_subtables;
 }
 
 /*
@@ -1709,7 +1852,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1860,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1731,17 +1874,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchTableStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6dc54c7283..2e84b24617 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1137,8 +1145,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1193,8 +1204,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1254,8 +1268,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1381,8 +1398,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2223,8 +2243,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3621,8 +3644,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4531,8 +4557,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4611,6 +4637,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4619,14 +4649,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4656,8 +4689,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index c0a52cdcc3..c15e72802f 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3236,7 +3236,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of relation synchronization workers per subscription."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7997b841cb..899f0299b8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12007,6 +12007,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6dff23fe6f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,22 +250,27 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5201280669..358c76e78e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1443,6 +1443,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..7cc8c8cfee
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,185 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# when the sequence definitions on the publisher and the subscriber do not match.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+($result, my $stdout, my $stderr) = $node_subscriber->psql(
+	'postgres', "
+        ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
+like(
+	$stderr,
+	qr/WARNING: ( [A-Z0-9]+:)? Sequence parameter in remote and local is not same for "public.regress_s4"/,
+	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
+);
+
+
+done_testing();
-- 
2.34.1
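
For anyone who wants to try the patches quickly, the TAP test above boils down
to roughly the following SQL (object names are taken from the test; the
connection string is a placeholder):

    -- Publisher
    CREATE SEQUENCE regress_s1;
    CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

    -- Subscriber (the sequence must already exist locally)
    CREATE SEQUENCE regress_s1;
    CREATE SUBSCRIPTION regress_seq_sub
        CONNECTION 'dbname=postgres host=...'  -- placeholder
        PUBLICATION regress_seq_pub;

    -- A plain refresh picks up newly published sequences only:
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;

    -- Re-synchronize all published sequences, including already-synced ones:
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

    -- Verify the result on the subscriber:
    SELECT last_value, log_cnt, is_called FROM regress_s1;

This is only a usage sketch of what the patch set implements, not a substitute
for the test itself.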

Attachment: v20240808-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 35fd4d3672da7cef1001b3b79b47895601fb3e78 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240808 2/4] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
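
As a quick usage sketch (not part of the patch; the publication name matches
the documentation example), the new syntax can be combined with the
pg_publication_sequences view added earlier in this series:

    CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;

    -- List the sequences included in the publication:
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'all_sequences';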
---
 doc/src/sgml/ref/create_publication.sgml  |  37 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 697 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..783874fb75 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,13 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a043fd4c66..9b3cad1cac 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10555,7 +10561,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10575,13 +10586,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10693,6 +10704,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19406,6 +19439,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check whether the same object type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except the unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

v20240808-0004-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 48a274ea5dc7412630a6921339e1474da6cc2d26 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240808 4/4] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 222 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 352 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a1a1d58a43..733570dd99 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..d0fdcdee05 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,200 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Sequences</title>
+
+  <para>
+   Sequences can be synchronized between a publisher and a subscriber using
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+   to initially synchronize sequences, <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+   synchronize any newly added sequences and <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+   to re-synchronize all sequences. A new sequence synchronization worker will
+   be started to synchronize the sequences after executing the above commands
+   and will exit once the sequences are synchronized.
+  </para>
+
+  <para>
+   The sequence synchronization worker is taken from the pool defined by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequences-definition-differ-publisher-subscriber">
+   <title>Differences in Sequence Definitions Between Publisher and Subscriber</title>
+   <para>
+    If there are differences in sequence definitions between the publisher and
+    subscriber, a WARNING is logged. To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+    It is advisable not to change sequence definitions on either the publisher
+    or the subscriber until synchronization is complete and the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+    reaches <literal>r</literal> (ready) state.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Handling Sequences Out of Sync Between Publisher and Subscriber</title>
+   <para>
+    Sequence values may frequently become out of sync due to updates on the
+    publisher. To verify this, compare the sequence values between the
+    publisher and subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples: Synchronizing Sequences Between Publisher and Subscriber</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create publications for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Create subscriptions for the publications.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that the initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber using ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES:
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+
+ <sect2 id="sequence-synchronization-caveats">
+   <title>Caveats</title>
+
+   <para>
+    At this writing, there are a couple of limitations of sequence
+    replication.  These will probably be fixed in future releases:
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Changes to sequence definitions during the execution of
+       <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+       <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+       may not be detected, potentially leading to inconsistent values. To avoid
+       this, refrain from modifying sequence definitions on either the publisher
+       or the subscriber until synchronization is complete and the
+       <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+       reaches <literal>r</literal> (ready) state.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Incremental synchronization of sequences is not supported.
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1871,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2195,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2210,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..7d1399b924 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequences-definition-differ-publisher-subscriber"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle sequences that are out of sync.
+     </para>
+     <para>
+      See <xref linkend="sequences-definition-differ-publisher-subscriber"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..d39976ca91 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequences-definition-differ-publisher-subscriber"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a0b692bf1e..8373ebc5b0 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2185,6 +2190,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

#132vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#121)
Re: Logical Replication of sequences

On Wed, 7 Aug 2024 at 10:27, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Aug 5, 2024 at 10:26 AM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 1 Aug 2024 at 04:25, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

I noticed that when replicating sequences (using the latest patches
0730_2*) the subscriber-side checks the *existence* of the sequence,
but apparently it is not checking other sequence attributes.

For example, consider:

Publisher: "CREATE SEQUENCE s1 START 1 INCREMENT 2;" should be a
sequence of only odd numbers.
Subscriber: "CREATE SEQUENCE s1 START 2 INCREMENT 2;" should be a
sequence of only even numbers.

Because the names match, currently the patch allows replication of the
s1 sequence. I think that might lead to unexpected results on the
subscriber. IMO it might be safer to report ERROR unless the sequences
match properly (i.e. not just a name check).

Below is a demonstration of the problem:

==========
Publisher:
==========

(publisher sequence is odd numbers)

test_pub=# create sequence s1 start 1 increment 2;
CREATE SEQUENCE
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
3
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
5
(1 row)

test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
test_pub=#

==========
Subscriber:
==========

(subscriber sequence is even numbers)

test_sub=# create sequence s1 start 2 increment 2;
CREATE SEQUENCE
test_sub=# SELECT * FROM nextval('s1');
nextval
---------
2
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
4
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
6
(1 row)

test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'
PUBLICATION pub1;
2024-08-01 08:43:04.198 AEST [24325] WARNING: subscriptions created
by regression test cases should have names starting with "regress_"
WARNING: subscriptions created by regression test cases should have
names starting with "regress_"
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION
test_sub=# 2024-08-01 08:43:04.294 AEST [26240] LOG: logical
replication apply worker for subscription "sub1" has started
2024-08-01 08:43:04.309 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
synchronization for subscription "sub1", sequence "s1" has finished
2024-08-01 08:43:04.323 AEST [26244] LOG: logical replication
sequence synchronization worker for subscription "sub1" has finished

(after the CREATE SUBSCRIPTION we are getting replicated odd values
from the publisher, even though the subscriber side sequence was
supposed to be even numbers)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
7
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
9
(1 row)

test_sub=# SELECT * FROM nextval('s1');
nextval
---------
11
(1 row)

(Looking at the description you would expect odd values for this
sequence to be impossible)

I see that for such even sequences, the user can still do 'setval' to an
odd number, and then nextval will keep on returning odd values.

postgres=# SELECT nextval('s1');
6

postgres=SELECT setval('s1', 43);
43

postgres=# SELECT nextval('s1');
45

test_sub=# \dS+ s1
Sequence "public.s1"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 2 | 1 | 9223372036854775807 | 2 | no | 1

Even if we check the sequence definition during the CREATE
SUBSCRIPTION/ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES commands, there's still
a chance that the sequence definition might change after the command
has been executed. Currently, there's no mechanism to lock a sequence,
and we also permit replication of table data even if the table
structures differ, such as mismatched data types like int and
smallint. I have modified the patch to log a warning informing users
that the sequence options on the publisher and subscriber are not the
same, and advising them to ensure that the sequence definitions are
consistent on both sides.
The v20240805 version patch attached at [1] has the changes for the same.
[1] - /messages/by-id/CALDaNm1Y_ot-jFRfmtwDuwmFrgSSYHjVuy28RspSopTtwzXy8w@mail.gmail.com
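
As a quick way to spot such definition mismatches up front, the user can
compare the sequence definitions on the two nodes manually. For example
(only a rough sketch, assuming the sequences of interest live in the public
schema), running the same query on the publisher and the subscriber and
diffing the output would show any differing options:

--- Run on both publisher and subscriber and compare the output
SELECT schemaname, sequencename, data_type, start_value, increment_by,
       min_value, max_value, cycle, cache_size
FROM pg_sequences
WHERE schemaname = 'public'
ORDER BY schemaname, sequencename;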

The behavior when applying is no different from setval. Having said
that, I agree that the sequence definition can change even after the
subscription is created, but earlier we were not syncing sequences and
thus the value of a particular sequence would remain in the
range/pattern defined by its attributes unless the user set it manually
using setval. But now, it is being changed in the background without
the user's knowledge.
The table case is different. In table replication, if we have a CHECK
constraint or, say, a primary key, then a value which violates these
constraints will never be inserted into the table, even during
replication on the subscriber. For sequences, the parameters (MIN, MAX,
START, INCREMENT) can be considered similar to check constraints; the
only difference is that during apply we still override these and copy
the publisher's value. Maybe detection of such inconsistencies can be
targeted later in a follow-up project. But for the time being, it will
be good to add a 'caveats' section in the docs mentioning all such
cases. The scope of this project should be clearly documented.

I have added a Caveats section mentioning this.
The changes for the same are available in the v20240808 version attached at [1].
[1]: /messages/by-id/CALDaNm1QQK_Pgx35LrJGuRxBzzYSO8rm1YGJF4w8hYc3Gm+5NQ@mail.gmail.com

Regards,
Vignesh

#133Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#131)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh, I reviewed the latest v20240808-0003 patch.

Attached are my minor change suggestions.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240809_seq_0003.txt (text/plain; charset=US-ASCII)
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 4e2f960..a77e810 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -517,9 +517,9 @@ HasSubscriptionTables(Oid subid)
  *
  * all_states:
  * If getting tables, if all_states is true get all tables, otherwise
- * only get tables that have not reached 'READY' state.
+ * only get tables that have not reached READY state.
  * If getting sequences, if all_states is true get all sequences,
- * otherwise only get sequences that are in 'init' state.
+ * otherwise only get sequences that are in INIT state.
  *
  * The returned list is palloc'ed in the current memory context.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bb6aa8e..2833379 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -868,10 +868,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
  * Update the subscription to refresh both the publication and the publication
  * objects associated with the subscription.
  *
- * If 'copy_data' parameter is true, the function will set the state
- * to "init"; otherwise, it will set the state to "ready".
+ * Parameters:
  *
- * When 'validate_publications' is provided with a publication list, the
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
  * function checks that the specified publications exist on the publisher.
  *
  * If 'refresh_tables' is true, update the subscription by adding or removing
@@ -882,9 +884,22 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
  * sequences that have been added or removed since the last subscription
  * creation or publication refresh.
  *
- * If 'resync_all_sequences' is true, mark all objects with "init" state
- * for re-synchronization; otherwise, only update the newly added tables and
- * sequences based on the copy_data parameter.
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
  */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
@@ -910,11 +925,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
-	/* resync_all_sequences cannot be specified with refresh_tables */
-	Assert(!(resync_all_sequences && refresh_tables));
-
-	/* resync_all_sequences cannot be specified with copy_data as false */
-	Assert(!(resync_all_sequences && !copy_data));
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
 
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
#134shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#125)
Re: Logical Replication of sequences

On Wed, Aug 7, 2024 at 2:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 7 Aug 2024 at 08:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 6, 2024 at 5:13 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 5 Aug 2024 at 18:05, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Aug 5, 2024 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 31 Jul 2024 at 14:39, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the LSN because the sequence LSN informs the user that
the sequence has been synchronized up to the LSN recorded in
pg_subscription_seq. Since we are not supporting incremental sync, the
user will be able to identify whether to run a refresh of sequences by
comparing the LSN in pg_subscription_seq with the LSN of the sequence
on the publisher (using the newly added pg_sequence_state).

How will the user know from the sequence's LSN that a refresh is
needed? The lsn here is the page_lsn, so the sequence might advance on
the publisher without changing the lsn, and thus the lsn may look the
same on the subscriber even though a sequence-refresh is needed. Am I
missing something here?

When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also retrieved and stored in
pg_subscription_rel as shown below:
--- Publisher page lsn
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
--------------------
(0/1510E38,65,1,t)
(1 row)
--- Subscriber stores the publisher's page lsn for the sequence
subscriber=# select * from pg_subscription_rel where srrelid = 16384;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16389 |   16384 | r          | 0/1510E38
(1 row)
If changes are made to the sequence, such as performing many nextvals,
the page LSN will be updated. Currently the sequence values are
prefetched in batches of SEQ_LOG_VALS (32), so the LSN will not get
updated while the prefetched values are being consumed; once they are
exhausted, the LSN will get updated.
For example:
--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
----------------------
(0/1558CA8,143,22,t)
(1 row)

The user can then compare this updated value with the sequence's LSN
in pg_subscription_rel to determine when to re-synchronize the
sequence.
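
For example, a rough sketch of such a comparison, run from the subscriber
(this assumes the dblink extension is installed and that pg_sequence_state()
exposes a page_lsn output column, as shown above; adjust names as needed):

--- A differing publisher_lsn suggests a sequence refresh is needed
SELECT sr.srsublsn AS last_synced_lsn, pub.page_lsn AS publisher_lsn
FROM pg_subscription_rel sr,
     dblink('dbname=test_pub',
            $$SELECT page_lsn FROM pg_sequence_state('seq1')$$)
       AS pub(page_lsn pg_lsn)
WHERE sr.srrelid = 'seq1'::regclass;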

Thanks for the details. But I was referring to the case where we are
in between pre-fetched values on publisher (say at 25th value), while
on subscriber we are slightly behind (say at 15th value), but page-lsn
will be the same on both. Since the subscriber is behind, a
sequence-refresh is needed on the subscriber, but by looking at the lsn
(which is the same), one cannot say that for sure. Let me know if I have
misunderstood it.

Yes, at present, if the value is within the pre-fetched range, we
cannot distinguish it solely using the page_lsn.

This makes sense to me.

However, the
pg_sequence_state function also provides last_value and log_cnt, which
can be used to handle these specific cases.
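
For instance, a sketch of such a comparison (the column names here assume
the patch's pg_sequence_state() output of page_lsn, last_value, log_cnt and
is_called):

--- On the publisher
publisher=# SELECT last_value, log_cnt FROM pg_sequence_state('seq1');
--- On the subscriber, the sequence relation itself exposes the same fields
subscriber=# SELECT last_value, log_cnt FROM seq1;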

BTW, can we document all these steps for users to know when to refresh
the sequences, if not already documented?

This has been documented in the v20240807 version attached at [1].
[1] - /messages/by-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ+6uJn-ZymV=0dbQ@mail.gmail.com

Vignesh, I looked at the patch dated 240808, but I could not find
these steps. Are you referring to the section 'Examples:
Synchronizing Sequences Between Publisher and Subscriber' in doc
patch004? If not, please point me to the concerned section.

thanks
Shveta

#135Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#131)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh, here are my review comments for the sequences docs patch
v20240808-0004.

======
doc/src/sgml/logical-replication.sgml

The new section content looked good.

Just some nitpicks including:
- renamed the section "Replicating Sequences"
- added missing mention about how to publish sequences
- rearranged the subscription commands into a more readable list
- some sect2 titles were very long; I shortened them.
- added <warning> markup for the sequence definition advice
- other minor rewording and typo fixes

~

1.
IMO the "Caveats" section can be removed.
- the advice to avoid changing the sequence definition is already
given earlier in the "Sequence Definition Mismatches" section
- the limitation of "incremental synchronization" is already stated in
the logical replication "Limitations" section
- (FYI, I removed it already in my nitpicks attachment)

======
doc/src/sgml/ref/alter_subscription.sgml

nitpick - I reversed the paragraphs to keep the references in a natural order.

======
Kind Regards,
Peter Smith.
Fujitsu Australia


On Fri, Aug 9, 2024 at 1:52 AM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 8 Aug 2024 at 08:30, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my v20240807-0003 review comments.

2a.
The paragraph starts by saying "Sequence data is not replicated.". It
seems wrong now. Doesn't that need rewording or removing?

Changed it to incremental sequence changes.

~

2b.
Should the info "If, however, some kind of switchover or failover..."
be mentioned in the "Logical Replication Failover" section [2],
instead of here?

I think mentioning this here is appropriate. The other section focuses
more on how logical replication can proceed with a new primary. Once
the logical replication setup is complete, sequences can be refreshed
at any time.

Rest of the comments are fixed, the attached v20240808 version patch
has the changes for the same.

Regards,
Vignesh

Attachments:

PS_NITPICKS_202400809_seq_0004.txt (text/plain; charset=US-ASCII)
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index d0fdcde..bc2aacc 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1571,49 +1571,88 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
  </sect1>
 
  <sect1 id="logical-replication-sequences">
-  <title>Sequences</title>
+  <title>Replicating Sequences</title>
 
   <para>
-   Sequences can be synchronized between a publisher and a subscriber using
-   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
-   to initially synchronize sequences, <link linkend="sql-altersubscription-params-refresh-publication">
-   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
-   synchronize any newly added sequences and <link linkend="sql-altersubscription-params-refresh-publication-sequences">
-   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
-   to re-synchronize all sequences. A new sequence synchronization worker will
-   be started to synchronize the sequences after executing the above commands
-   and will exit once the sequences are synchronized.
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
   </para>
 
   <para>
-   Sequence synchronization worker will be used from
-   <link linkend="guc-max-sync-workers-per-subscription">
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and
+   will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
    <varname>max_sync_workers_per_subscription</varname></link>
    configuration.
   </para>
 
-  <sect2 id="sequences-definition-differ-publisher-subscriber">
-   <title>Differences in Sequence Definitions Between Publisher and Subscriber</title>
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
    <para>
-    If there are differences in sequence definitions between the publisher and
-    subscriber, a WARNING is logged. To resolve this, use
+    To resolve this, use
     <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
     to align the subscriber's sequence parameters with those of the publisher.
     Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
-    It is advisable not to change sequence definitions on either the publisher
-    or the subscriber until synchronization is complete and the
+   </para>
+   <para>
+    Changes to sequence definitions during the execution of
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    may not be detected, potentially leading to inconsistent values. To avoid
+    this, refrain from modifying sequence definitions on both publisher and
+    subscriber until synchronization is complete and the
     <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
     reaches <literal>r</literal> (ready) state.
-   </para>
+  </para>
   </sect2>
 
   <sect2 id="sequences-out-of-sync">
-   <title>Handling Sequences Out of Sync Between Publisher and Subscriber</title>
+   <title>Refreshing Stale Sequences</title>
    <para>
-    Sequence values may frequently become out of sync due to updates on the
-    publisher. To verify this, compare the sequences values between the
-    publisher and subscriber and execute
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequences values between the publisher and
+    subscriber and execute
     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
     if required.
@@ -1621,7 +1660,7 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
   </sect2>
 
   <sect2 id="logical-replication-sequences-examples">
-   <title>Examples: Synchronizing Sequences Between Publisher and Subscriber</title>
+   <title>Examples</title>
 
    <para>
     Create some test sequences on the publisher.
@@ -1642,7 +1681,7 @@ CREATE SEQUENCE
 </programlisting></para>
 
    <para>
-    Update  the sequences at the publisher side few times.
+    Update the sequences at the publisher side few times.
 <programlisting>
 test_pub=# SELECT nextval('s1');
  nextval
@@ -1667,14 +1706,14 @@ test_pub=# SELECT nextval('s2');
 </programlisting></para>
 
    <para>
-    Create publications for the sequences.
+    Create a publication for the sequences.
 <programlisting>
 test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
 CREATE PUBLICATION
 </programlisting></para>
 
    <para>
-    Create subscriptions for the publications.
+    Subscribe to the publication.
 <programlisting>
 test_sub=# CREATE SUBSCRIPTION sub1
 test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
@@ -1683,7 +1722,7 @@ CREATE SUBSCRIPTION
 </programlisting></para>
 
    <para>
-    Observe that initial sequence value is synchronized.
+    Observe that initial sequence values are synchronized.
 <programlisting>
 test_sub=# SELECT * FROM s1;
  last_value | log_cnt | is_called
@@ -1699,7 +1738,7 @@ test_sub=# SELECT * FROM s2;
 </programlisting></para>
 
    <para>
-    Update the sequneces at the publisher side.
+    Update the sequences at the publisher side.
 <programlisting>
 test_pub=# SELECT nextval('s1');
  nextval
@@ -1714,7 +1753,9 @@ test_pub=# SELECT nextval('s2');
 </programlisting></para>
 
    <para>
-    Re-synchronize all the sequences at the subscriber using ALTER SUBSCRIPTION REFRESH PUBLCIATIN SEQUENCES:
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
 <programlisting>
 test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
 ALTER SUBSCRIPTION
@@ -1733,35 +1774,6 @@ test_sub=# SELECT * FROM s2
 </programlisting></para>
   </sect2>
 
- <sect2 id="sequence-synchronization-caveats">
-   <title>Caveats</title>
-
-   <para>
-    At this writing, there are couple of limitations of the sequence
-    replication.  These will probably be fixed in future releases:
-
-    <itemizedlist>
-     <listitem>
-      <para>
-       Changes to sequence definitions during the execution of
-       <link linkend="sql-altersubscription-params-refresh-publication-sequences">
-       <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
-       may not be detected, potentially leading to inconsistent values. To avoid
-       this, refrain from modifying sequence definitions on both publisher and
-       subscriber until synchronization is complete and the
-       <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
-       reaches <literal>r</literal> (ready) state.
-      </para>
-     </listitem>
-
-     <listitem>
-      <para>
-       Incremental synchronization of sequences is not supported.
-      </para>
-     </listitem>
-    </itemizedlist>
-   </para>
-  </sect2>
  </sect1>
 
  <sect1 id="logical-replication-conflicts">
@@ -1871,7 +1883,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Incremental sequence changes is not replicated.  The data in serial or
+     Incremental sequence changes are not replicated.  The data in serial or
      identity columns backed by sequences will of course be replicated as part
      of the table, but the sequence itself would still show the start value on
      the subscriber.  If the subscriber is used as a read-only database, then
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 7d1399b..457a614 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -200,7 +200,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
          </para>
          <para>
-          See <xref linkend="sequences-definition-differ-publisher-subscriber"/> for recommendations on how
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
           to handle any warnings about differences in the sequence definition
           between the publisher and the subscriber, which might occur when
           <literal>copy_data = true</literal>.
@@ -234,14 +234,14 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       will re-synchronize the sequence data for all subscribed sequences.
      </para>
      <para>
-      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
-      to identify sequences and handle out of sync sequences.
-     </para>
-     <para>
-      See <xref linkend="sequences-definition-differ-publisher-subscriber"/> for
+      See <xref linkend="sequence-definition-mismatches"/> for
       recommendations on how to handle any warnings about differences in the
       sequence definition between the publisher and the subscriber.
      </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify sequences and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index d39976c..1b1c999 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -264,7 +264,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>origin</literal> parameter.
          </para>
          <para>
-          See <xref linkend="sequences-definition-differ-publisher-subscriber"/>
+          See <xref linkend="sequence-definition-mismatches"/>
           for recommendations on how to handle any warnings about differences in
           the sequence definition between the publisher and the subscriber,
           which might occur when <literal>copy_data = true</literal>.
#136vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#129)
4 attachment(s)
Re: Logical Replication of sequences

On Thu, 8 Aug 2024 at 12:21, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Aug 7, 2024 at 1:45 PM vignesh C <vignesh21@gmail.com> wrote:

The remaining comments have been addressed, and the changes are
included in the attached v20240807 version patch.

Thanks for addressing the comments. Please find a few comments for v20240807:

patch003:
2)

* The page_lsn allows the user to determine if the sequence has been updated
* since the last synchronization with the subscriber. This is done by
* comparing the current page_lsn with the value stored in pg_subscription_rel
* from the last synchronization.
*/
Datum
pg_sequence_state(PG_FUNCTION_ARGS)

--This information is still incomplete. Maybe we should mention the
other attribute name as well, which helps to determine this.

I have removed this comment now, as suggesting that users use
pg_sequence_state and the sequence for the page_lsn comparison seems
complex; the same can be achieved by comparing the sequence values with
a single statement instead of a couple of statements. Peter had felt
this would be easier, based on comment 3c at [1].
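
For reference, the single-statement comparison amounts to running the same
query on the publisher and the subscriber and comparing the results; a rough
sketch, assuming a sequence named s1 exists on both sides:

--- Run on both publisher and subscriber and compare the output
SELECT last_value, log_cnt, is_called FROM s1;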

5)
IIUC, sequencesync_failure_time is changed by multiple processes.
Seq-sync worker sets it before exiting on failure, while apply worker
resets it. Also, the applied worker reads it at a few places. Shall it
be accessed using LogicalRepWorkerLock?

If the sequence sync worker is already running, the apply worker will
not access sequencesync_failure_time. Only if the sequence sync worker
is not running will the apply worker access sequencesync_failure_time,
in the code below. I feel there is no need to use LogicalRepWorkerLock
in this case.

...
syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
InvalidOid, WORKERTYPE_SEQUENCESYNC,
true);
if (syncworker)
{
/* Now safe to release the LWLock */
LWLockRelease(LogicalRepWorkerLock);
break;
}

/*
* Count running sync workers for this subscription, while we have the
* lock.
*/
nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);

/* Now safe to release the LWLock */
LWLockRelease(LogicalRepWorkerLock);

/*
* If there are free sync worker slot(s), start a new sequence sync
* worker, and break from the loop.
*/
if (nsyncworkers < max_sync_workers_per_subscription)
{
TimestampTz now = GetCurrentTimestamp();

if (!MyLogicalRepWorker->sequencesync_failure_time ||
TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
now, wal_retrieve_retry_interval))
{
MyLogicalRepWorker->sequencesync_failure_time = 0;

logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
MyLogicalRepWorker->dbid,
MySubscription->oid,
MySubscription->name,
MyLogicalRepWorker->userid,
InvalidOid,
DSM_HANDLE_INVALID);
break;
}
}
...

6)
process_syncing_sequences_for_apply():

--I feel MyLogicalRepWorker->sequencesync_failure_time should be reset
to 0 after we are sure that logicalrep_worker_launch() has launched
the worker without any error. But not sure what could be the clean way
to do it? If we move it after logicalrep_worker_launch() call, there
are chances that seq-sync worker has started and failed already and
has set this failure time which will then be mistakenly reset by apply
worker. Also moving it inside logicalrep_worker_launch() does not seem
a good way.

I felt we can keep it the existing way, to stay consistent with how the
table sync worker restart is handled in process_syncing_tables_for_apply.

The rest of the comments are fixed in the v20240809 version patch
attached.

[1]: /messages/by-id/CAHut+Pvaq=0xsDWdVQ-kdjRa8Az+vgiMFTvT2E2nR3N-47TO8A@mail.gmail.com

Regards,
Vignesh

Attachments:

v20240809-0003-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From 4302fa2abaf3dad5dd6c91afbb2d2f9dab50e2af Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 9 Aug 2024 18:06:31 +0530
Subject: [PATCH v20240809 3/4] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences  are added in 'init' state to pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  83 ++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 361 +++++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 505 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 148 +++--
 src/backend/replication/logical/worker.c      |  74 ++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   5 +-
 src/include/replication/worker_internal.h     |  31 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 185 +++++++
 26 files changed, 1410 insertions(+), 193 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..a77e8101c8 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,35 +511,41 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
 	HeapTuple	tup;
 	int			nkeys = 0;
-	ScanKeyData skey[2];
+	ScanKeyData skey;
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
-	ScanKeyInit(&skey[nkeys++],
+	ScanKeyInit(&skey,
 				Anum_pg_subscription_rel_srsubid,
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
-		ScanKeyInit(&skey[nkeys++],
-					Anum_pg_subscription_rel_srsubstate,
-					BTEqualStrategyNumber, F_CHARNE,
-					CharGetDatum(SUBREL_STATE_READY));
-
 	scan = systable_beginscan(rel, InvalidOid, false,
-							  NULL, nkeys, skey);
+							  NULL, nkeys, &skey);
 
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
@@ -532,6 +556,27 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE)
+		{
+			/* Skip sequences if they were not requested */
+			if (!get_sequences)
+				continue;
+
+			/* Skip all non-init sequences unless all_states was requested */
+			if (!all_states && (subrel->srsubstate != SUBREL_STATE_INIT))
+				continue;
+		}
+		else
+		{
+			/* Skip tables if they were not requested */
+			if (!get_tables)
+				continue;
+
+			/* Skip all ready tables unless all_states was requested */
+			if (!all_states && (subrel->srsubstate == SUBREL_STATE_READY))
+				continue;
+		}
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn will be utilized in logical replication sequence
+ * synchronization to record the page_lsn of sequence in the pg_subscription_rel
+ * system catalog. It will reflect the page_lsn of the remote sequence at the
+ * moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..2ac63c71cb 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication,
+	 * creating this origin is unnecessary at this point. It can be created
+	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
+	 * publication is updated to include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables and
+			 * sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +797,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is not necessary at the moment. It can be
+			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +825,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +864,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +925,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +949,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +977,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1002,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1019,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1037,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(LOG,
+							(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											 get_namespace_name(get_rel_namespace(relid)),
+											 get_rel_name(relid),
+											 sub->name)));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1083,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1138,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip sequence relations. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1527,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1542,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1583,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1602,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1658,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1900,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1927,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2285,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2463,130 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[8] = {TEXTOID, TEXTOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	List	   *seqlist = NIL;
+	StringInfo	warning_sequences = NULL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT n.nspname, c.relname, s.seqtypid,\n"
+						   "s.seqmin, s.seqmax, s.seqstart, s.seqincrement, s.seqcycle\n"
+						   "FROM pg_publication p,\n"
+						   "      LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid), pg_class c\n"
+						   "      JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						   "      JOIN pg_sequence s ON c.oid = s.seqrelid\n"
+						   "WHERE c.oid = gps.relid AND p.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 8, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		Oid			seqtypid;
+		int64		seqmin;
+		int64		seqmax;
+		int64		seqstart;
+		int64		seqincrement;
+		bool		seqcycle;
+		bool		isnull;
+		RangeVar   *rv;
+		Oid			relid;
+		HeapTuple	tup;
+		Form_pg_sequence seqform;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+		seqtypid = DatumGetObjectId(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+		seqmin = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+		Assert(!isnull);
+		seqmax = DatumGetInt64(slot_getattr(slot, 5, &isnull));
+		Assert(!isnull);
+		seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+		Assert(!isnull);
+		seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+		Assert(!isnull);
+		seqcycle = DatumGetBool(slot_getattr(slot, 8, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Get the local sequence */
+		tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+				 get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid));
+
+		seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+			seqform->seqincrement != seqincrement ||
+			seqform->seqcycle != seqcycle)
+		{
+			if (!warning_sequences)
+				warning_sequences = makeStringInfo();
+			else
+				appendStringInfoString(warning_sequences, ", ");
+
+			/* Add the sequences that don't match in publisher and subscriber */
+			appendStringInfo(warning_sequences, "\"%s.%s\"",
+							 get_namespace_name(get_rel_namespace(relid)),
+							 get_rel_name(relid));
+		}
+
+		ReleaseSysCache(tup);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	if (warning_sequences)
+	{
+		/*
+		 * Issue a warning listing all the sequences that differ between the
+		 * publisher and subscriber.
+		 */
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters of the remote and local sequences do not match for %s",
+						warning_sequences->data),
+				errhint("Alter or re-create the local sequences to use the same parameters as the remote sequences."));
+		pfree(warning_sequences);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 77707bb384..f8dd93a83a 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..7621fa8aed 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	tablesync.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequencesync worker in the shared memory
+ * slot of the subscription's apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..1711fc3248 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'tablesync.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..373514452e
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,505 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
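+ *
+ * A minimal usage sketch (the connection string, subscription and publication
+ * names below are illustrative only; "pub" is assumed to publish the desired
+ * sequences):
+ *
+ *   CREATE SUBSCRIPTION sub CONNECTION '...' PUBLICATION pub;
+ *   -- later, force all published sequences back to INIT for re-sync:
+ *   ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES;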
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+extern List *sequence_states_not_ready;
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * The sequence's last_value is returned as the function result, while
+ * log_cnt, is_called and page_lsn are returned via the corresponding output
+ * parameters.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *page_lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		last_value = 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return last_value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_page_lsn);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Conversely, an excessively high
+ * value would keep transactions, and the locks they hold on the sequences,
+ * open for extended periods.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
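+
+/*
+ * As an illustration, a subscription with 450 sequences in INIT state is
+ * synchronized in 5 transactions: four batches of 100 sequences followed by
+ * one final batch of 50.
+ */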
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply
+	 * worker's and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence synchronization does not honor RLS policies.  That is not
+		 * a problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not
+		 * be able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		/* Restore the original user context if we switched above. */
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or last
+		 * remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
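+			/*
+			 * For example, if seq_count is 250, the final batch is reached at
+			 * curr_seq == 250 and logs the sequences at list indexes 200..249,
+			 * since i = 249 - (249 % 100) = 200.
+			 */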
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
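+ *
+ * For example, with the default wal_retrieve_retry_interval of 5 seconds, a
+ * failed sequencesync worker is not relaunched until at least 5 seconds
+ * after the failure time it recorded on exit.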
+ */
+void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequence sync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..8a05545c27 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -130,19 +130,22 @@ typedef enum
 	SYNC_TABLE_STATE_VALID,
 } SyncingTablesState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingTablesState relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List *sequence_states_not_ready = NIL;
+static bool FetchTableStates(void);
 
 static StringInfo copybuf = NULL;
 
 /*
  * Exit routine for synchronization worker.
  */
-static void
+void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -157,15 +160,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -205,7 +217,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -252,7 +264,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -278,9 +290,9 @@ wait_for_worker_state_change(char expected_state)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -387,7 +399,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -429,9 +441,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -464,6 +473,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -477,11 +494,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -518,7 +530,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -661,10 +674,12 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -682,7 +697,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchTableStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -1320,7 +1348,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1564,39 +1592,48 @@ copy_table_done:
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not in READY state into table_states_not_ready, and
+ * sequences that are in INIT state into sequence_states_not_ready. The
+ * pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_TABLE_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch tables that are in non-ready state, and sequences that are in
+		 * init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -1604,19 +1641,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
+		 * If there were not-READY tables found then we know it does. But
 		 * if table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -1625,11 +1666,17 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
 	}
 
-	return has_subrels;
+	return has_subtables;
 }
 
 /*
@@ -1709,7 +1756,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1717,7 +1764,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1731,17 +1778,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchTableStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..7ff0d2c4dc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1147,8 +1155,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1203,8 +1214,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1269,8 +1283,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1404,8 +1421,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2246,8 +2266,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3644,8 +3667,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,8 +4712,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index c0a52cdcc3..c15e72802f 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3236,7 +3236,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of relation synchronization workers per subscription."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7997b841cb..899f0299b8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12007,6 +12007,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..479407abf7 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,10 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
+
+extern void process_syncing_sequences_for_apply(void);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6dff23fe6f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,22 +250,27 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
+
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -326,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5201280669..358c76e78e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1443,6 +1443,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..7cc8c8cfee
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,185 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched for
+# the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences on the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences on the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# when the sequence definition does not match between the publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+($result, my $stdout, my $stderr) = $node_subscriber->psql(
+	'postgres', "
+        ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
+like(
+	$stderr,
+	qr/WARNING: ( [A-Z0-9]+:)? Sequence parameter in remote and local is not same for "public.regress_s4"/,
+	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
+);
+
+
+done_testing();
-- 
2.34.1
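
For anyone reproducing the above by hand, here is a minimal psql sketch of the same check, assuming the regress_s1 sequence and the regress_seq_sub subscription created in this test:

    -- on the publisher
    SELECT last_value, log_cnt, is_called FROM regress_s1;

    -- on the subscriber, once pg_subscription_rel shows srsubstate = 'r'
    SELECT last_value, log_cnt, is_called FROM regress_s1;

    -- if the values have drifted apart, re-synchronize all sequences
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;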

Attachment: v20240809-0004-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 195334e08244c49841f1c1e9bd99940bd44c9c41 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240809 4/4] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 393 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a1a1d58a43..733570dd99 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..cad11a83a3 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,241 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   On the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+   <para>
+    Changes to sequence definitions during the execution of
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    may not be detected, potentially leading to inconsistent values. To avoid
+    this, refrain from modifying sequence definitions on either the publisher
+    or the subscriber until synchronization is complete and
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+    reaches the <literal>r</literal> (ready) state.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+
+ <sect2 id="sequence-synchronization-caveats">
+   <title>Caveats</title>
+
+   <para>
+    At this writing, there are a couple of limitations of sequence
+    replication.  These will probably be fixed in future releases:
+
+  <itemizedlist>
+   <listitem>
+    <para>
+     Changes to sequence definitions during the execution of
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     may not be detected, potentially leading to inconsistent values. To avoid
+     this, refrain from modifying sequence definitions on either the publisher
+     or the subscriber until synchronization is complete and
+     <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+     reaches the <literal>r</literal> (ready) state.
+    </para>
+   </listitem>
+
+   <listitem>
+    <para>
+     Incremental synchronization of sequences is not supported.
+    </para>
+   </listitem>
+  </itemizedlist>
+   </para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1912,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2236,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2251,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a0b692bf1e..8373ebc5b0 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2185,6 +2190,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   the sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1
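
As a rough end-to-end sketch of the documented workflow (the publication and subscription names below are only illustrative; the view and command are the ones added by this patch set):

    -- publisher: publish every sequence in the database
    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

    -- publisher: list the sequences carried by the publication
    SELECT * FROM pg_publication_sequences WHERE pubname = 'pub_all_seq';

    -- subscriber: re-synchronize all subscribed sequences when needed
    ALTER SUBSCRIPTION sub_all_seq REFRESH PUBLICATION SEQUENCES;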

Attachment: v20240809-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch; charset=US-ASCII)
From 28fbae379cb91542174f5ec820e15dde8e29d61f Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240809 1/4] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 0f7154b76a..ca5be43283 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value stored in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d36f6001bb..7997b841cb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
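
A quick illustration of the new function (demo_seq is just an example name; the output columns match the func.sgml entry above):

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');

    -- page_lsn lets the caller order this snapshot relative to concurrent
    -- WAL activity on the sequence
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('demo_seq');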

Attachment: v20240809-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 17da84d53c2f1d02484873485a612a41ec37a6ff Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240809 2/4] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..92758c7198 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables and
+   synchronizes all sequences:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
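
To make the reworded privilege check concrete, a non-superuser attempting the
new form should now fail along these lines (a sketch of the expected behavior,
not output captured from a test run):

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
ERROR:  must be superuser to create a FOR ALL SEQUENCES publication
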
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+	;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences. Also, check
+ * that each publication object type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
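
The duplicate check in preprocess_pub_all_objtype_list() is what produces the
errors exercised in the regression tests further down, e.g. (matching the
expected output in publication.out below):

CREATE PUBLICATION pub_dup FOR ALL SEQUENCES, TABLES, SEQUENCES;
ERROR:  invalid publication object list
DETAIL:  SEQUENCES can be specified only once.
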
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
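
Note that dumpPublication() always emits the combined clause with TABLES first,
whatever order was used at creation time; the pub6 test case below relies on
that normalization. A minimal round-trip sketch (illustrative name, empty
publish list to match the test):

CREATE PUBLICATION pub_both FOR ALL SEQUENCES, TABLES WITH (publish = '');
-- dumped by pg_dump as:
-- CREATE PUBLICATION pub_both FOR ALL TABLES, SEQUENCES WITH (publish = '');
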
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
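
With these completion rules, pressing TAB after FOR ALL should offer both
object types (a sketch of the expected interaction, not a captured session):

postgres=# CREATE PUBLICATION mypub FOR ALL <TAB>
SEQUENCES  TABLES
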
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

#137vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#134)
Re: Logical Replication of sequences

On Fri, 9 Aug 2024 at 12:13, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Aug 7, 2024 at 2:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 7 Aug 2024 at 08:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 6, 2024 at 5:13 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 5 Aug 2024 at 18:05, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Aug 5, 2024 at 11:04 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 31 Jul 2024 at 14:39, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 10, 2024 at 5:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Jun 2024 at 12:24, Amul Sul <sulamul@gmail.com> wrote:

On Sat, Jun 8, 2024 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Jun 2024 at 14:11, Amit Kapila <amit.kapila16@gmail.com> wrote:
[...]
A new catalog table, pg_subscription_seq, has been introduced for
mapping subscriptions to sequences. Additionally, the sequence LSN
(Log Sequence Number) is stored, facilitating determination of
sequence changes occurring before or after the returned sequence
state.

Can't it be done using pg_depend? It seems a bit excessive unless I'm missing
something.

We'll require the LSN because it tells the user that the sequence has been
synchronized up to that LSN in pg_subscription_seq. Since we are not
supporting incremental sync, the user will be able to decide whether to run
a sequence refresh by comparing the LSN in pg_subscription_seq with the LSN
of the sequence in the publisher (using the newly added pg_sequence_state
function).

How will the user know from the sequence's LSN that a refresh is needed?
The LSN here is the page LSN, so the sequence might advance on the publisher
without changing it, and thus the LSN may look the same on the subscriber
even though a sequence refresh is needed. Am I missing something here?

When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also retrieved and stored in
pg_subscription_rel, as shown below:
--- Publisher page lsn
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
--------------------
(0/1510E38,65,1,t)
(1 row)
--- Subscriber stores the publisher's page lsn for the sequence
subscriber=# select * from pg_subscription_rel where srrelid = 16384;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16389 |   16384 | r          | 0/1510E38
(1 row)
If changes are made to the sequence, such as performing many nextvals,
the page LSN will be updated. Currently, sequence values are prefetched
in batches of SEQ_LOG_VALS (32), so the LSN will not be updated for the
prefetched values; once the prefetched values are consumed, the LSN will
be updated.
For example:
--- Updated LSN on the publisher (old lsn - 0/1510E38, new lsn - 0/1558CA8)
publisher=# select pg_sequence_state('seq1');
pg_sequence_state
----------------------
(0/1558CA8,143,22,t)
(1 row)

The user can then compare this updated value with the sequence's LSN
in pg_subscription_rel to determine when to re-synchronize the
sequence.

Thanks for the details. But I was referring to the case where we are in
between pre-fetched values on the publisher (say at the 25th value), while
on the subscriber we are slightly behind (say at the 15th value), but the
page LSN will be the same on both. Since the subscriber is behind, a
sequence refresh is needed on the subscriber, but by looking at the LSN
(which is the same), one cannot say that for sure. Let me know if I have
misunderstood it.

Yes, at present, if the value is within the pre-fetched range, we
cannot distinguish it solely using the page_lsn.

This makes sense to me.

However, the
pg_sequence_state function also provides last_value and log_cnt, which
can be used to handle these specific cases.

BTW, can we document all these steps for users to know when to refresh
the sequences, if not already documented?

This has been documented in the v20240807 version attached at [1].
[1] - /messages/by-id/CALDaNm01Z6Oo9osGMFTOoyTR1kVoyh1rEvZ+6uJn-ZymV=0dbQ@mail.gmail.com

Vignesh, I looked at the patch dated 240808, but I could not find these
steps. Are you referring to the section 'Examples: Synchronizing Sequences
Between Publisher and Subscriber' in doc patch 0004? If not, please point me
to the concerned section.

I'm referring to the "Refreshing Stale Sequences" part in the
v20240809 version patch attached at [1]/messages/by-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7=-LSQx=XYjqU=w@mail.gmail.com, which only mentions directly
comparing the sequence values. I have removed the reference to
pg_sequence_state now, since suggesting that users consult pg_sequence_state
and the page_lsn seemed complex; the same can be achieved by comparing the
sequence values with a single statement instead of a couple of statements.
Peter had felt this would be easier, based on
comment 3c at [1]/messages/by-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7=-LSQx=XYjqU=w@mail.gmail.com.
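
As a rough sketch of the kind of single-statement check being suggested (the
sequence and subscription names below are only for illustration; the exact
query shown in the docs patch may differ):

-- On the publisher, note the current sequence value:
publisher=# select last_value from seq1;

-- On the subscriber, check the locally synchronized value:
subscriber=# select last_value from seq1;

-- If the publisher's last_value is ahead of the subscriber's, re-synchronize:
subscriber=# alter subscription sub1 refresh publication sequences;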

[1]: /messages/by-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7=-LSQx=XYjqU=w@mail.gmail.com

Regards,
Vignesh

#138vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#133)
Re: Logical Replication of sequences

On Fri, 9 Aug 2024 at 05:51, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, I reviewed the latest v20240808-0003 patch.

Attached are my minor change suggestions.

Thanks, these changes are merged in the v20240809 version posted at [1]/messages/by-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7=-LSQx=XYjqU=w@mail.gmail.com.
[1]: /messages/by-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7=-LSQx=XYjqU=w@mail.gmail.com

Regards,
Vignesh

#139vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#135)
Re: Logical Replication of sequences

On Fri, 9 Aug 2024 at 12:40, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, here are my review comments for the sequences docs patch
v20240808-0004.

======
doc/src/sgml/logical-replication.sgml

The new section content looked good.

Just some nitpicks including:
- renamed the section "Replicating Sequences"
- added missing mention about how to publish sequences
- rearranged the subscription commands into a more readable list
- some sect2 titles were very long; I shortened them.
- added <warning> markup for the sequence definition advice
- other minor rewording and typo fixes

I have retained the caveats section for now; I will think more and remove
it if required in the next version.

The rest of the comments are fixed in the v20240809 version patch
attached at [1]/messages/by-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7=-LSQx=XYjqU=w@mail.gmail.com.
[1]: /messages/by-id/CALDaNm0LJCtGoBCO6DFY-RDjR8vxapW3W1f7=-LSQx=XYjqU=w@mail.gmail.com

Regards,
Vignesh

#140Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#136)
2 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh,

v20240809-0001. No comments.
v20240809-0002. See below.
v20240809-0003. See below.
v20240809-0004. No comments.

//////////

Here are my review comments for patch v20240809-0002.

nit - Tweak wording in new docs example, because a publication only
publishes the sequences; it doesn't "synchronize" anything.

//////////

Here are my review comments for patch v20240809-0003.

fetch_sequence_list:
nit - move comment
nit - minor rewording for parameter WARNING message

======
.../replication/logical/sequencesync.c
src/backend/replication/logical/tablesync.c

1.
Currently the placement of the 'sequence_states_not_ready' list declaration
seems backwards. IMO it makes more sense for the declaration to be in
sequencesync.c, and the extern in tablesync.c. (Please also see review
comment #3 below, which might affect this too.)

~~~

2.
 static bool
-FetchTableStates(bool *started_tx)
+FetchTableStates(void)
 {
- static bool has_subrels = false;
-
- *started_tx = false;
+ static bool has_subtables = false;
+ bool started_tx = false;

Maybe add an explanation of why 'has_subtables' is declared static here.

~~~

3.
I am not sure that it was an improvement to move the
process_syncing_sequences_for_apply() function into the
sequencesync.c. Calling the sequence code from the tablesync code
still looks strange. OTOH, I see why you don't want to leave it in
tablesync.c.

Perhaps it would be better to refactor/move all following functions
back to the (apply) worker.c instead:
- process_syncing_relations
- process_syncing_sequences_for_apply(void)
- process_syncing_tables_for_apply(void)

Actually, now that there are 2 kinds of 'sync' workers, maybe you
should introduce a new module (e.g. 'commonsync.c' or 'syncworker.c',
...), where you can put functions such as process_syncing_relations()
plus any other code common to both tablesync and sequencesync. That
might make more sense than having one call to the other.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240812_SEQ_0001.txt (text/plain)
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 92758c7..64214ba 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -427,8 +427,8 @@ CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
   </para>
 
   <para>
-   Create a publication that publishes all changes in all tables and
-   synchronizes all sequences:
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
 <programlisting>
 CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
 </programlisting>
PS_NITPICKS_20240812_SEQ_0003.txt (text/plain)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2ac63c7..c595873 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -2546,6 +2546,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
 
 		seqform = (Form_pg_sequence) GETSTRUCT(tup);
 
+		/* Build a list of sequences that don't match in publisher and subscriber */
 		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
 			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
 			seqform->seqincrement != seqincrement ||
@@ -2556,7 +2557,6 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
 			else
 				appendStringInfoString(warning_sequences, ", ");
 
-			/* Add the sequences that don't match in publisher and subscriber */
 			appendStringInfo(warning_sequences, "\"%s.%s\"",
 							 get_namespace_name(get_rel_namespace(relid)),
 							 get_rel_name(relid));
@@ -2575,7 +2575,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
 		 */
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				errmsg("Sequence parameter in remote and local is not same for %s",
+				errmsg("Parameters differ for remote and local sequences %s",
 						warning_sequences->data),
 				errhint("Alter/Re-create the sequence using the same parameter as in remote."));
 		pfree(warning_sequences);
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
index 7cc8c8c..52453cc 100644
--- a/src/test/subscription/t/034_sequences.pl
+++ b/src/test/subscription/t/034_sequences.pl
@@ -177,7 +177,7 @@ $node_subscriber->safe_psql(
         ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
 like(
 	$stderr,
-	qr/WARNING: ( [A-Z0-9]+:)? Sequence parameter in remote and local is not same for "public.regress_s4"/,
+	qr/WARNING: ( [A-Z0-9]+:)? Parameters differ for remote and local sequences "public.regress_s4"/,
 	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
 );
 
#141Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#136)
Re: Logical Replication of sequences

Hi Vignesh,

I noticed it is not currently possible (there is no syntax to do it) to
ALTER an existing publication so that it will publish SEQUENCES.

Isn't that a limitation? Why?

For example, why should users be prevented from changing a FOR ALL
TABLES publication into a FOR ALL TABLES, SEQUENCES one?

Similarly, there are other combinations that are not possible (illustrated below):
DROP ALL SEQUENCES from a publication that is FOR ALL TABLES, SEQUENCES
DROP ALL TABLES from a publication that is FOR ALL TABLES, SEQUENCES
ADD ALL TABLES to a publication that is FOR ALL SEQUENCES
...
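
Expressed as commands, these would look something like the following
(hypothetical syntax only, to illustrate the point; none of these are
accepted by the current patch set, and the publication names are made up):

ALTER PUBLICATION pub_all_tables ADD ALL SEQUENCES;
ALTER PUBLICATION pub_all_tables_sequences DROP ALL SEQUENCES;
ALTER PUBLICATION pub_all_tables_sequences DROP ALL TABLES;
ALTER PUBLICATION pub_all_sequences ADD ALL TABLES;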

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#142Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#136)
Re: Logical Replication of sequences

Hi Vignesh,

I found that when 2 subscriptions both subscribe to a publication that
publishes sequences, an ERROR occurs on refresh.

======

Publisher:
----------

test_pub=# create publication pub1 for all sequences;

Subscriber:
-----------

test_sub=# create subscription sub1 connection 'dbname=test_pub'
publication pub1;

test_sub=# create subscription sub2 connection 'dbname=test_pub'
publication pub1;

test_sub=# alter subscription sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub1" set to INIT state
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub1" set to INIT state
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] ERROR: tuple already updated by self
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
ERROR: tuple already updated by self

test_sub=# alter subscription sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub2" set to INIT state
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub2" set to INIT state
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] ERROR: tuple already updated by self
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
ERROR: tuple already updated by self

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#143vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#140)
5 attachment(s)
Re: Logical Replication of sequences

On Mon, 12 Aug 2024 at 08:50, Peter Smith <smithpb2250@gmail.com> wrote:

~~~

3.
I am not sure that it was an improvement to move the
process_syncing_sequences_for_apply() function into the
sequencesync.c. Calling the sequence code from the tablesync code
still looks strange. OTOH, I see why you don't want to leave it in
tablesync.c.

Perhaps it would be better to refactor/move all following functions
back to the (apply) worker.c instead:
- process_syncing_relations
- process_syncing_sequences_for_apply(void)
- process_syncing_tables_for_apply(void)

Actually, now that there are 2 kinds of 'sync' workers, maybe you
should introduce a new module (e.g. 'commonsync.c' or 'syncworker.c',
...), where you can put functions such as process_syncing_relations()
plus any other code common to both tablesync and sequencesync. That
might make more sense than having one call to the other.

I created syncutils.c to consolidate code that supports worker
synchronization, table synchronization, and sequence synchronization.
While it may not align exactly with your suggestion, I included
functions like finish_sync_worker, invalidate_syncing_relation_states,
FetchRelationStates, and process_syncing_relations in this new file. I
believe this organization will make the code easier to review.

The rest of the comments are also fixed in the attached v20240812 version
patch.

Regards,
Vignesh

Attachments:

v20240812-0003-Reorganization-of-tablesync-code.patch (text/x-patch)
From 8b01de865fcf1bc86d5b9962b86a4cf4b44f173a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240812 3/5] Reorganization of tablesync code

Reorganized the tablesync code to create a new syncutils file, which will
help with the sequence synchronization worker code.
---
 src/backend/replication/logical/Makefile    |   1 +
 src/backend/replication/logical/meson.build |   1 +
 src/backend/replication/logical/syncutils.c | 178 ++++++++++++++++++++
 src/backend/replication/logical/tablesync.c | 161 +-----------------
 src/include/replication/worker_internal.h   |   5 +
 5 files changed, 188 insertions(+), 158 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..3964a30109 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -27,6 +27,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..27a0e30ab7 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -13,6 +13,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..4e3983602b
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,178 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains common code for synchronization of tables that will
+ *	  help the apply worker and the table synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_TABLE_STATE_NEEDS_REBUILD,
+	SYNC_TABLE_STATE_REBUILD_STARTED,
+	SYNC_TABLE_STATE_VALID,
+} SyncingTablesState;
+
+static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+extern List *table_states_not_ready;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+finish_sync_worker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+{
+	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchTableStates(bool *started_tx)
+{
+	static bool has_subrels = false;
+
+	*started_tx = false;
+
+	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch all non-ready tables. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY relations found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subrels = (table_states_not_ready != NIL) ||
+			HasSubscriptionRelations(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			table_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	return has_subrels;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+process_syncing_tables(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			process_syncing_tables_for_sync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			process_syncing_tables_for_apply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..8776fe4e0f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,7 +238,7 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
+void
 process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
@@ -414,7 +361,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
+void
 process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1561,77 +1477,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..fe63303439 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -250,16 +250,21 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() finish_sync_worker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
+extern bool FetchTableStates(bool *started_tx);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
+extern void process_syncing_sequences_for_apply(void);
 extern void invalidate_syncing_table_states(Datum arg, int cacheid,
 											uint32 hashvalue);
 
-- 
2.34.1

Attachment: v20240812-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From a6d356e7a0de7e8a4224782a882536c99fbf1165 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:51:04 +0530
Subject: [PATCH v20240812 4/5] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves the sequences associated with the publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker performs the synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values from the publisher using pg_sequence_state.
     b) Sets the local sequence values accordingly.
     c) Updates the sequence state to 'READY'.
     d) Commits each batch of 100 synchronized sequences.

2) Refreshing sequences:
   - Sequences are refreshed with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Sequences newly added on the publisher are added in 'init'
     state to pg_subscription_rel.
   - The sequence sync worker is started as described for subscription
     creation.
   - Synchronization is performed only for the newly added sequences.

3) Introduce a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds sequences newly added on
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - The sequence sync worker then synchronizes all sequences, as
     described for subscription creation.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  84 ++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 361 +++++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 505 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 122 +++--
 src/backend/replication/logical/tablesync.c   |  40 +-
 src/backend/replication/logical/worker.c      |  74 ++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   5 +-
 src/include/replication/worker_internal.h     |  33 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 185 +++++++
 27 files changed, 1420 insertions(+), 200 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl
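
As a rough illustration of the flow described in the commit message, here is a
minimal SQL sketch (not taken from the patch). The publication-side syntax for
publishing sequences (FOR ALL SEQUENCES) is assumed from the earlier patches in
this series, and pub1/sub1/s1 are placeholder names:

-- Publisher: a published sequence that has already been used.
CREATE SEQUENCE s1;
SELECT nextval('s1');
CREATE PUBLICATION pub1 FOR ALL SEQUENCES;

-- Subscriber: the sequence must already exist with matching parameters.
CREATE SEQUENCE s1;
CREATE SUBSCRIPTION sub1 CONNECTION 'host=pub dbname=postgres' PUBLICATION pub1;

-- Sequences start in 'init' ('i') state in pg_subscription_rel and are moved
-- to 'ready' ('r') by the sequencesync worker.
SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;

-- Pick up sequences added to the publication later (newly added ones only):
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

-- Re-synchronize all sequences, including already-synchronized ones:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;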

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
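
For reference, the new function (and the pg_publication_sequences view defined
later in this patch) can be queried as below; 'pub1' is a placeholder for a
publication created with the sequence-publishing syntax from the earlier
patches in this series:

SELECT gps.relid::regclass
FROM pg_get_publication_sequences('pub1') AS gps(relid);

SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';
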
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..53c022927d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,35 +511,40 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * For tables, if all_states is true get all tables, otherwise get only
+ * the tables that have not yet reached READY state.
+ * For sequences, if all_states is true get all sequences, otherwise get
+ * only the sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
 	HeapTuple	tup;
-	int			nkeys = 0;
-	ScanKeyData skey[2];
+	ScanKeyData skey;
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
-	ScanKeyInit(&skey[nkeys++],
+	ScanKeyInit(&skey,
 				Anum_pg_subscription_rel_srsubid,
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
-		ScanKeyInit(&skey[nkeys++],
-					Anum_pg_subscription_rel_srsubstate,
-					BTEqualStrategyNumber, F_CHARNE,
-					CharGetDatum(SUBREL_STATE_READY));
-
 	scan = systable_beginscan(rel, InvalidOid, false,
-							  NULL, nkeys, skey);
+							  NULL, 1, &skey);
 
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
@@ -532,6 +555,27 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE)
+		{
+			/* Skip sequences if they were not requested */
+			if (!get_sequences)
+				continue;
+
+			/* Skip all non-init sequences unless all_states was requested */
+			if (!all_states && (subrel->srsubstate != SUBREL_STATE_INIT))
+				continue;
+		}
+		else
+		{
+			/* Skip tables if they were not requested */
+			if (!get_tables)
+				continue;
+
+			/* Skip all ready tables unless all_states was requested */
+			if (!all_states && (subrel->srsubstate == SUBREL_STATE_READY))
+				continue;
+		}
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt of sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page LSN of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page LSN of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
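
For context, pg_sequence_state() is introduced by an earlier patch in this
series; the sequencesync worker added later in this patch runs essentially the
following query on the publisher (the sequence name is a placeholder, and a
regclass/oid argument is assumed, matching how the worker passes the remote
OID):

SELECT last_value, log_cnt, is_called, page_lsn
FROM pg_sequence_state('s1'::regclass);

The returned page_lsn is what gets recorded in pg_subscription_rel for the
synchronized sequence, per the comment above.
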
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..c59587343f 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication,
+	 * creating this origin is unnecessary at this point. It can be created
+	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
+	 * publication is updated to include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables and
+			 * sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +797,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is not necessary at the moment. It can be
+			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +825,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +864,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +925,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +949,16 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +977,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1002,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1019,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1037,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(LOG,
+							(errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											 get_namespace_name(get_rel_namespace(relid)),
+											 get_rel_name(relid),
+											 sub->name)));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1083,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1138,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1527,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1542,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1583,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1602,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1658,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1900,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1927,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2285,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2463,130 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[8] = {TEXTOID, TEXTOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	List	   *seqlist = NIL;
+	StringInfo	warning_sequences = NULL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT n.nspname, c.relname, s.seqtypid,\n"
+						   "s.seqmin, s.seqmax, s.seqstart, s.seqincrement, s.seqcycle\n"
+						   "FROM pg_publication p,\n"
+						   "      LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid), pg_class c\n"
+						   "      JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						   "      JOIN pg_sequence s ON c.oid = s.seqrelid\n"
+						   "WHERE c.oid = gps.relid AND p.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 8, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		Oid			seqtypid;
+		int64		seqmin;
+		int64		seqmax;
+		int64		seqstart;
+		int64		seqincrement;
+		bool		seqcycle;
+		bool		isnull;
+		RangeVar   *rv;
+		Oid			relid;
+		HeapTuple	tup;
+		Form_pg_sequence seqform;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+		seqtypid = DatumGetObjectId(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+		seqmin = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+		Assert(!isnull);
+		seqmax = DatumGetInt64(slot_getattr(slot, 5, &isnull));
+		Assert(!isnull);
+		seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+		Assert(!isnull);
+		seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+		Assert(!isnull);
+		seqcycle = DatumGetBool(slot_getattr(slot, 8, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Get the local sequence */
+		tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+				 get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid));
+
+		seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+		/* Build a list of sequences that don't match in publisher and subscriber */
+		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+			seqform->seqincrement != seqincrement ||
+			seqform->seqcycle != seqcycle)
+		{
+			if (!warning_sequences)
+				warning_sequences = makeStringInfo();
+			else
+				appendStringInfoString(warning_sequences, ", ");
+
+			appendStringInfo(warning_sequences, "\"%s.%s\"",
+							 get_namespace_name(get_rel_namespace(relid)),
+							 get_rel_name(relid));
+		}
+
+		ReleaseSysCache(tup);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	if (warning_sequences)
+	{
+		/*
+		 * Issue a warning listing all the sequences that differ between the
+		 * publisher and subscriber.
+		 */
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences %s",
+					   warning_sequences->data),
+				errhint("Alter or re-create the local sequences with the same parameters as the remote ones."));
+		pfree(warning_sequences);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
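
To illustrate the parameter check performed in fetch_sequence_list() above: if
the local sequence definition does not match the remote one, the sequence is
still added to the subscription, but the WARNING built there is emitted. A
hypothetical example (s2 and sub1 are placeholder names):

-- Publisher
CREATE SEQUENCE s2 INCREMENT BY 1;
-- Subscriber
CREATE SEQUENCE s2 INCREMENT BY 10;
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
-- expect: a WARNING listing "public.s2", with a hint to alter or re-create
-- the local sequence with the remote parameters.
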
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 3964a30109..99d248dd01 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
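
With the launcher changes above, a running sequencesync worker should be
visible through the existing statistics view; assuming the worker_type column
that pg_stat_subscription already exposes, something like

SELECT subname, pid, worker_type FROM pg_stat_subscription;

would report the type as "sequence synchronization" while the worker is
active.
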
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 27a0e30ab7..c3c836b88b 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..df9c40f84e
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,505 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+List *sequence_states_not_ready = NIL;
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * The sequence's last_value is returned directly, while log_cnt,
+ * is_called and page_lsn are returned via the corresponding output
+ * parameters.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *page_lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		last_value = (Datum) 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence state from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return last_value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_page_lsn);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+				ereport(LOG,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequence sync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 4e3983602b..ed353f1dfd 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -8,8 +8,9 @@
  *	  src/backend/replication/logical/syncutils.c
  *
  * NOTES
- *	  This file contains common code for synchronization of tables that will be
- *	  help apply worker and table synchronization worker.
+ *	  This file contains common code for synchronization of tables and
+ *	  sequences that will help the apply worker, table synchronization worker
+ *	  and sequence synchronization worker.
  *-------------------------------------------------------------------------
  */
 
@@ -24,21 +25,25 @@
 
 typedef enum
 {
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
+	SYNC_RELATION_STATE_NEEDS_REBUILD,
+	SYNC_RELATION_STATE_REBUILD_STARTED,
+	SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 extern List *table_states_not_ready;
+extern List *sequence_states_not_ready;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -53,15 +58,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -70,47 +84,60 @@ finish_sync_worker(void)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 }
 
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * Copy tables that are not in READY state into table_states_not_ready, and
+ * sequences that have INIT state into sequence_states_not_ready. The
+ * pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchTableStates(bool *started_tx)
+FetchRelationStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	/*
+	 * This is declared static, since the same value can be reused until the
+	 * subscription relation cache is invalidated.
+	 */
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_RELATION_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/*
+		 * Fetch tables that are in non-ready state, and sequences that are in
+		 * init state.
+		 */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -118,19 +145,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
+		 * If there were not-READY tables found then we know it does. But
 		 * if table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -139,18 +170,26 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_RELATION_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATION_STATE_VALID;
 	}
 
-	return has_subrels;
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	return has_subtables;
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -168,7 +207,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8776fe4e0f..e26a877d75 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,7 +465,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -1236,7 +1237,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1554,7 +1555,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1562,7 +1563,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1576,17 +1577,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..7ff0d2c4dc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1147,8 +1155,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1203,8 +1214,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1269,8 +1283,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1404,8 +1421,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2246,8 +2266,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3644,8 +3667,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply worker, tablesync worker and
+ * sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,8 +4712,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 636780673b..f9da570014 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3241,7 +3241,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of relation synchronization workers per subscription."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index e83d6846f3..8cf1d86b5f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12007,6 +12007,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..479407abf7 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,10 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
+
+extern void process_syncing_sequences_for_apply(void);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index fe63303439..c57afbbcb0 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,27 +250,30 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() finish_sync_worker(void);
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
-extern bool FetchTableStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
 extern void process_syncing_sequences_for_apply(void);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -331,15 +338,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..52453cce2d
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,185 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# when the sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+($result, my $stdout, my $stderr) = $node_subscriber->psql(
+	'postgres', "
+        ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
+like(
+	$stderr,
+	qr/WARNING: ( [A-Z0-9]+:)? Parameters differ for remote and local sequences "public.regress_s4"/,
+	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
+);
+
+
+done_testing();
-- 
2.34.1

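For anyone trying the patches, here is a minimal usage sketch of the workflow
the TAP test above exercises (the connection string and object names are
illustrative, and it assumes the full patch set is applied on both nodes):

    -- publisher
    CREATE SEQUENCE regress_s1;
    CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;
    SELECT nextval('regress_s1');    -- advance the sequence a few times

    -- subscriber (the sequence must already exist with a matching definition)
    CREATE SEQUENCE regress_s1;
    CREATE SUBSCRIPTION regress_seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION regress_seq_pub;

    -- later, e.g. before an upgrade, re-synchronize all published sequences
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

    -- wait until all tracked relations, sequences included, reach 'ready' state
    SELECT count(*) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');

As the test shows, plain REFRESH PUBLICATION only picks up newly published
sequences, while REFRESH PUBLICATION SEQUENCES also re-synchronizes the
already-synced ones.
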
Attachment: v20240812-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 854d7c753694dcc394cda023bd68167afc5b1925 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240812 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced to show which
publications include a given sequence and which sequences are included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check if a publication object type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		/* Build the title before printTableInit(), which keeps a pointer to it */
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
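+		/* Header labels for the seven columns selected above */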
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
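+	/* puballsequences is only present on server versions 18 and newer */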
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
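+/*
+ * A single object type listed in FOR ALL ... (TABLES or SEQUENCES), with the
+ * token location kept for reporting duplicate entries.
+ */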
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

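A minimal sketch of the syntax the regression-test diff above exercises, using illustrative names that are not part of the patch:

CREATE SEQUENCE s_demo;
CREATE PUBLICATION pub_seq_only FOR ALL SEQUENCES;
CREATE PUBLICATION pub_seq_and_tables FOR ALL SEQUENCES, TABLES;
-- \dRp+ now shows an "All sequences" column for every publication, and
-- \d+ on a sequence lists the publications it belongs to.
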
Attachment: v20240812-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From c6e6ed3bd9e9aa60a21ee87570be5ec6ba67cb56 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240812 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 968a998552..a802130774 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current last_value from the sequence's data tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index be37576a34..e83d6846f3 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

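As a quick orientation to the patch above, here is a minimal usage sketch of the new function (the sequence name is illustrative; USAGE or SELECT privilege on the sequence is required):

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');
-- Returns the current sequence state together with the page LSN:
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');
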
Attachment: v20240812-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 666198eb49b4a13bf75dde2ed17a589cff3d570d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240812 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 393 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a1a1d58a43..733570dd99 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..cad11a83a3 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,241 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+   <para>
+    Changes to sequence definitions during the execution of
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    may not be detected, potentially leading to inconsistent values. To avoid
+    this, refrain from modifying sequence definitions on both publisher and
+    subscriber until synchronization is complete and the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+    reaches <literal>r</literal> (ready) state.
+  </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+
+ <sect2 id="sequence-synchronization-caveats">
+   <title>Caveats</title>
+
+   <para>
+    At this writing, there are a couple of limitations of sequence
+    replication.  These will probably be fixed in future releases:
+
+  <itemizedlist>
+   <listitem>
+    <para>
+     Changes to sequence definitions during the execution of
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     may not be detected, potentially leading to inconsistent values. To avoid
+     this, refrain from modifying sequence definitions on both publisher and
+     subscriber until synchronization is complete and the
+     <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+     reaches <literal>r</literal> (ready) state.
+    </para>
+   </listitem>
+
+   <listitem>
+    <para>
+     Incremental synchronization of sequences is not supported.
+    </para>
+   </listitem>
+  </itemizedlist>
+   </para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1912,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2236,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2251,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

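Condensing the behavior described by the documentation patch above, the two refresh variants differ as follows (the subscription name is illustrative):

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
  -- picks up newly added tables and sequences; previously subscribed
  -- sequences are not re-synchronized
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
  -- re-synchronizes the data of all subscribed sequences
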
#144vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#142)
Re: Logical Replication of sequences

On Mon, 12 Aug 2024 at 10:40, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

I found that when 2 subscriptions are both subscribing to a
publication publishing sequences, an ERROR occurs on refresh.

======

Publisher:
----------

test_pub=# create publication pub1 for all sequences;

Subscriber:
-----------

test_sub=# create subscription sub1 connection 'dbname=test_pub'
publication pub1;

test_sub=# create subscription sub2 connection 'dbname=test_pub'
publication pub1;

test_sub=# alter subscription sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub1" set to INIT state
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub1" set to INIT state
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] ERROR: tuple already updated by self
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
ERROR: tuple already updated by self

test_sub=# alter subscription sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub2" set to INIT state
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub2" set to INIT state
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] ERROR: tuple already updated by self
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
ERROR: tuple already updated by self

This issue is fixed in the v20240812 version attached at [1].
[1]: /messages/by-id/CALDaNm3hS58W0RTbgsMTk-YvXwt956uabA=kYfLGUs3uRNC2Qg@mail.gmail.com

Regards,
Vignesh

#145vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#141)
Re: Logical Replication of sequences

On Mon, 12 Aug 2024 at 09:59, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

I noticed it is not currently possible (there is no syntax for it) to
ALTER an existing publication so that it will publish
SEQUENCES.

Isn't that a limitation? Why?

For example, why should users be prevented from changing a FOR ALL
TABLES publication into a FOR ALL TABLES, SEQUENCES one?

Similarly, there are other combinations that are not possible (see
the syntax sketch after this list):
DROP ALL SEQUENCES from a publication that is FOR ALL TABLES, SEQUENCES
DROP ALL TABLES from a publication that is FOR ALL TABLES, SEQUENCES
ADD ALL TABLES to a publication that is FOR ALL SEQUENCES
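
For illustration, the kind of syntax being asked about might look like
the sketch below; this is hypothetical only, the publication names are
placeholders, and none of these forms are accepted by the current
patchset:

    ALTER PUBLICATION pub_tables ADD ALL SEQUENCES;    -- ALL TABLES -> ALL TABLES, SEQUENCES
    ALTER PUBLICATION pub_both DROP ALL SEQUENCES;     -- keep only ALL TABLES
    ALTER PUBLICATION pub_both DROP ALL TABLES;        -- keep only ALL SEQUENCES
    ALTER PUBLICATION pub_sequences ADD ALL TABLES;    -- ALL SEQUENCES -> ALL TABLES, SEQUENCES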

Yes, this should be addressed. However, I'll defer it until the
current set of patches is finalized and all comments have been
resolved.

Regards,
Vignesh

#146Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#143)
2 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh, Here are my review comments for latest v20240812* patchset:

patch v20240812-0001. No comments.
patch v20240812-0002. Fixed docs. LGTM.
patch v20240812-0003. This is new refactoring. See below.
patch v20240812-0004. (was 0003). See below.
patch v20240812-0005. (was 0004). No comments.

//////

patch v20240812-0003.

3.1. GENERAL

Hmm. I am guessing this was provided as a separate patch to aid review
by showing that existing functions are moved? OTOH you can't really
judge this patch properly without already knowing details of what will
come next in the sequencesync. i.e. As a *standalone* patch without
the sequencesync.c the refactoring doesn't make much sense.

Maybe it is OK later to combine patches 0003 and 0004. Alternatively,
keep this patch separated but give greater emphasis in the comment
header to say this patch only exists separately in order to help the
review.

======
Commit message

3.2.
Reorganized tablesync code to generate a syncutils file which will
help in sequence synchronization worker code.

~

"generate" ??

======
src/backend/replication/logical/syncutils.c

3.3. "common code" ??

FYI - There are multiple code comments mentioning "common code..."
which, in the absence of the sequencesync worker (which comes in the
next patch), have nothing "common" about them at all. Fixing them and
then fixing them again in the next patch might cause unnecessary code
churn, but OTOH they aren't correct as-is either. I have left them
alone for now.

~

3.4. function names

With the re-shuffling that this patch does, and changing several from
static to not-static, should the function names remain as they are?
They look random to me.
- finish_sync_worker(void)
- invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
- FetchTableStates(bool *started_tx)
- process_syncing_tables(XLogRecPtr current_lsn)

I think using a consistent naming convention would be better. e.g.
SyncFinishWorker
SyncInvalidateTableStates
SyncFetchTableStates
SyncProcessTables

~~~

nit - file header comment

======
src/backend/replication/logical/tablesync.c

3.5.
-static void
+void
 process_syncing_tables_for_sync(XLogRecPtr current_lsn)
-static void
+void
 process_syncing_tables_for_apply(XLogRecPtr current_lsn)

Since these functions are no longer static should those function names
be changed to use the CamelCase convention for non-static API?

//////////

patch v20240812-0004.

======
src/backend/replication/logical/syncutils.c

nit - file header comment (made same as patch 0003)

~

FetchRelationStates:
nit - IIUC sequence states are only INIT -> READY. So the comments in
this function dont need to specifically talk about sequence INIT
state.

======
src/backend/utils/misc/guc_tables.c

4.1.
  {"max_sync_workers_per_subscription",
  PGC_SIGHUP,
  REPLICATION_SUBSCRIBERS,
- gettext_noop("Maximum number of table synchronization workers per
subscription."),
+ gettext_noop("Maximum number of relation synchronization workers per
subscription."),
  NULL,
  },

I was wondering if "relation synchronization workers" is meaningful to
the user because that seems like new terminology.
Maybe it should say "... of table + sequence synchronization workers..."

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240813_SEQ_0003.txt (text/plain)
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 4e39836..4bbc481 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -1,5 +1,5 @@
 /*-------------------------------------------------------------------------
- * sequencesync.c
+ * syncutils.c
  *	  PostgreSQL logical replication: common synchronization code
  *
  * Copyright (c) 2024, PostgreSQL Global Development Group
@@ -8,8 +8,8 @@
  *	  src/backend/replication/logical/syncutils.c
  *
  * NOTES
- *	  This file contains common code for synchronization of tables that will be
- *	  help apply worker and table synchronization worker.
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
  *-------------------------------------------------------------------------
  */
 
PS_NITPICKS_20240813_SEQ_0004.txt (text/plain)
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ed353f1..1702be9 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -8,9 +8,8 @@
  *	  src/backend/replication/logical/syncutils.c
  *
  * NOTES
- *	  This file contains common code for synchronization of tables and
- *	  sequences that will be help apply worker, table synchronization worker
- *    and sequence synchronization.
+ *	  This file contains code common to table synchronization workers, and
+ *    the sequence synchronization worker.
  *-------------------------------------------------------------------------
  */
 
@@ -93,7 +92,7 @@ invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
  * Copy tables that are not READY state into table_states_not_ready, and
- * sequences that have INIT state into sequence_states_not_ready. The
+ * sequences that are not READY state into sequence_states_not_ready. The
  * pg_subscription_rel catalog is shared by tables and sequences. Changes to
  * either sequences or tables can affect the validity of relation states, so we
  * update both table_states_not_ready and sequence_states_not_ready
@@ -132,10 +131,7 @@ FetchRelationStates(void)
 			started_tx = true;
 		}
 
-		/*
-		 * Fetch tables that are in non-ready state, and sequences that are in
-		 * init state.
-		 */
+		/* Fetch tables and sequences that are in non-READY state. */
 		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   false);
 
#147Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#144)
Re: Logical Replication of sequences

On Mon, Aug 12, 2024 at 11:07 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 12 Aug 2024 at 10:40, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

I found that when 2 subscriptions are both subscribing to a
publication publishing sequences, an ERROR occurs on refresh.

======

Publisher:
----------

test_pub=# create publication pub1 for all sequences;

Subscriber:
-----------

test_sub=# create subscription sub1 connection 'dbname=test_pub'
publication pub1;

test_sub=# create subscription sub2 connection 'dbname=test_pub'
publication pub1;

test_sub=# alter subscription sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub1" set to INIT state
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub1" set to INIT state
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
2024-08-12 15:04:04.947 AEST [7306] ERROR: tuple already updated by self
2024-08-12 15:04:04.947 AEST [7306] STATEMENT: alter subscription
sub1 refresh publication sequences;
ERROR: tuple already updated by self

test_sub=# alter subscription sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub2" set to INIT state
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] LOG: sequence "public.seq1" of
subscription "sub2" set to INIT state
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
2024-08-12 15:04:30.427 AEST [7306] ERROR: tuple already updated by self
2024-08-12 15:04:30.427 AEST [7306] STATEMENT: alter subscription
sub2 refresh publication sequences;
ERROR: tuple already updated by self

This issue is fixed in the v20240812 version attached at [1].
[1] - /messages/by-id/CALDaNm3hS58W0RTbgsMTk-YvXwt956uabA=kYfLGUs3uRNC2Qg@mail.gmail.com

Yes, I confirmed it is now fixed. Thanks!

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#148Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#143)
Re: Logical Replication of sequences

Hi Vignesh,

I have been using the latest patchset, trying a few things using many
(1000) sequences.

Here are some observations, plus some suggestions for consideration.

~~~~~

OBSERVATION #1

When 1000s of sequences are refreshed using REFRESH PUBLICATION
SEQUENCES, the logging is excessive. For example, since there is only
one sequencesync worker, why does it need to broadcast that it is
"finished" separately for every single sequence? That is giving 1000s
of lines of logs which don't seem to be of much interest to a user.

...
2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0918" has
finished
2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0919" has
finished
2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0920" has
finished
2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0921" has
finished
2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0922" has
finished
2024-08-13 16:17:04.151 AEST [5002] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0923" has
finished
...

Perhaps just LOG when each "batch" is completed, but the individual
sequence finished logs can just be DEBUG information?

~~~~~

OBSERVATION #2

When 1000s of sequences are refreshed (set to INIT) then there are
1000s of logs like below:

...
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0698"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0699"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0700"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0701"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.874 AEST [10301] LOG: sequence "public.seq_0702"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.874 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
...

I felt that showing the STATEMENT for all of these is overkill. How
about changing that ereport LOG so it does not emit the statement 1000
times? Or, maybe you can implement it as a "dynamic" log that emits
the STATEMENT for the first few messages but then skips it for the
remaining ~995 logs.

~~~~~

OBSERVATION #3

The WARNING about mismatched sequences currently looks like this:

2024-08-13 16:41:45.496 AEST [10301] WARNING: Parameters differ for
remote and local sequences "public.seq_0999"
2024-08-13 16:41:45.496 AEST [10301] HINT: Alter/Re-create the
sequence using the same parameter as in remote.

Although you could probably deduce it from nearby logs, I think it
might be more helpful to also identify the subscription name in this
WARNING message. Otherwise, if there are many publications the user
may have no idea where the mismatched "remote" is coming from.
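
As context for that HINT, aligning a mismatched local sequence with
the remote definition would be an ALTER SEQUENCE along these lines
(the parameter values below are only placeholders for whatever the
remote sequence actually uses):

    ALTER SEQUENCE public.seq_0999
        AS bigint
        INCREMENT BY 1
        MINVALUE 1
        MAXVALUE 9223372036854775807
        CACHE 1
        NO CYCLE;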

~~~~

OBSERVATION #4

When 1000s of sequences are refreshed then there are 1000s of
associated logs. But (given there is only one sequencesync worker)
those logs are not always in the order I was expecting to see them.

e.g.
...
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0885" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0887" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0888" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0889" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0890" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0906" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0566" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0568" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0569" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0570" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0571" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0582" has
finished
...

Is there a way to refresh sequences in a more natural (e.g.
alphabetical) order to make these logs more readable?
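
By way of illustration only (this is an assumption about what an
"alphabetical" ordering could mean here, not what the patch currently
does), ordering the published sequences by name would be a query like:

    SELECT n.nspname, c.relname
    FROM pg_class c
         JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'S'
    ORDER BY n.nspname, c.relname;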

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#149vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#146)
5 attachment(s)
Re: Logical Replication of sequences

On Tue, 13 Aug 2024 at 09:19, Peter Smith <smithpb2250@gmail.com> wrote:

3.1. GENERAL

Hmm. I am guessing this was provided as a separate patch to aid review
by showing that existing functions are moved? OTOH you can't really
judge this patch properly without already knowing details of what will
come next in the sequencesync. i.e. As a *standalone* patch without
the sequencesync.c the refactoring doesn't make much sense.

Maybe it is OK later to combine patches 0003 and 0004. Alternatively,
keep this patch separated but give greater emphasis in the comment
header to say this patch only exists separately in order to help the
review.

I have kept this as a separate patch only to show that it contains no
new code changes. If we move it into the next patch it will be
difficult for reviewers to know which code is new and which is old.
At commit time we can merge it with the next one. I felt it is better
to note this in the commit message rather than in the comment header,
so I have updated the commit message.

======
src/backend/replication/logical/syncutils.c

3.3. "common code" ??

FYI - There are multiple code comments mentioning "common code..."
which, in the absence of the sequencesync worker (which comes in the
next patch), have nothing "common" about them at all. Fixing them and
then fixing them again in the next patch might cause unnecessary code
churn, but OTOH they aren't correct as-is either. I have left them
alone for now.

We can ignore this, as it will get merged into the next one. If you
have any comments, you can give them on top of the next (0004) patch.

~

3.4. function names

With the re-shuffling that this patch does, and changing several from
static to not-static, should the function names remain as they are?
They look random to me.
- finish_sync_worker(void)
- invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
- FetchTableStates(bool *started_tx)
- process_syncing_tables(XLogRecPtr current_lsn)

I think using a consistent naming convention would be better. e.g.
SyncFinishWorker
SyncInvalidateTableStates
SyncFetchTableStates
SyncProcessTables

One advantage of keeping the existing names the same wherever
possible is that it will help when merging changes to back-branches.
So I'm not making this change.

~~~

nit - file header comment

======
src/backend/replication/logical/tablesync.c

3.5.
-static void
+void
process_syncing_tables_for_sync(XLogRecPtr current_lsn)
-static void
+void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)

Since these functions are no longer static should those function names
be changed to use the CamelCase convention for non-static API?

One advantage of keeping the existing names the same wherever
possible is that it will help when merging changes to back-branches.
So I'm not making this change.

The rest of the comments have been fixed; the attached v20240813
version contains the changes.

Regards,
Vignesh

Attachments:

v20240813-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 854d7c753694dcc394cda023bd68167afc5b1925 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240813 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

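Putting the regression diffs above together, here is a minimal end-to-end sketch of the syntax this patch set introduces (assuming the patches are applied on both nodes; all object names and the connection string below are illustrative, not taken from the patch):

    -- Publisher: publish every sequence in the database.
    CREATE SEQUENCE s1;
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- The new puballsequences flag is visible in pg_publication.
    SELECT pubname, puballtables, puballsequences
      FROM pg_publication WHERE pubname = 'pub_seqs';

    -- Subscriber: the initial sync copies the current state of all
    -- published sequences.
    CREATE SUBSCRIPTION sub_seqs
      CONNECTION 'dbname=postgres host=publisher'
      PUBLICATION pub_seqs;

    -- Re-synchronize all sequences later, e.g. just before cutover.
    ALTER SUBSCRIPTION sub_seqs REFRESH PUBLICATION SEQUENCES;
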
Attachment: v20240813-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From d2aabee40f57fdaf567fac98483df85fce931bac Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240813 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 393 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a1a1d58a43..733570dd99 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..cad11a83a3 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,241 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
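+   <para>
+    For example, assuming a sequence <literal>s1</literal> whose
+    <literal>INCREMENT BY</literal> value differs between the two nodes (the
+    names and the parameter shown here are illustrative only), the mismatch
+    could be resolved as follows:
+<programlisting>
+test_sub=# ALTER SEQUENCE s1 INCREMENT BY 10;
+ALTER SEQUENCE
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+</programlisting></para>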
+   <para>
+    Changes to sequence definitions during the execution of
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    may not be detected, potentially leading to inconsistent values. To avoid
+    this, refrain from modifying sequence definitions on either the publisher
+    or the subscriber until synchronization is complete and the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+    reaches <literal>r</literal> (ready) state.
+   </para>
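+   <para>
+    The synchronization state of the subscribed sequences can be checked in
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>;
+    for example (the output shown is illustrative):
+<programlisting>
+test_sub=# SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;
+ srrelid | srsubstate
+---------+------------
+ s1      | r
+ s2      | r
+(2 rows)
+</programlisting></para>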
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
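+   <para>
+    For example, the current state of a sequence can be inspected on each node
+    and compared; the sequence <literal>s1</literal> and the values shown here
+    are illustrative only:
+<programlisting>
+test_pub=# SELECT last_value, is_called FROM s1;
+ last_value | is_called
+------------+-----------
+         12 | t
+(1 row)
+
+test_sub=# SELECT last_value, is_called FROM s1;
+ last_value | is_called
+------------+-----------
+         11 | t
+(1 row)
+</programlisting></para>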
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+
+ <sect2 id="sequence-synchronization-caveats">
+   <title>Caveats</title>
+
+   <para>
+    At this writing, there are couple of limitations of the sequence
+    replication.  These will probably be fixed in future releases:
+
+  <itemizedlist>
+   <listitem>
+    <para>
+     Changes to sequence definitions during the execution of
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     may not be detected, potentially leading to inconsistent values. To avoid
+     this, refrain from modifying sequence definitions on either the publisher
+     or the subscriber until synchronization is complete and the
+     <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>
+     reaches <literal>r</literal> (ready) state.
+    </para>
+   </listitem>
+
+   <listitem>
+    <para>
+     Incremental synchronization of sequences is not supported.
+    </para>
+   </listitem>
+  </itemizedlist>
+   </para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1912,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2236,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2251,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
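+
+  <para>
+   For example, assuming a publication <literal>pub1</literal> that publishes
+   all sequences (the names and output shown are illustrative only):
+<programlisting>
+test_pub=# SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub1    | public     | s1
+ pub1    | public     | s2
+(2 rows)
+</programlisting></para>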
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

Attachment: v20240813-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From d377c11226cd98214c3ee0335afa6793f58674b1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240813 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/replication/logical/Makefile    |   1 +
 src/backend/replication/logical/meson.build |   1 +
 src/backend/replication/logical/syncutils.c | 178 ++++++++++++++++++++
 src/backend/replication/logical/tablesync.c | 161 +-----------------
 src/include/replication/worker_internal.h   |   5 +
 5 files changed, 188 insertions(+), 158 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..3964a30109 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -27,6 +27,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..27a0e30ab7 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -13,6 +13,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..4bbc4814a4
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,178 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_TABLE_STATE_NEEDS_REBUILD,
+	SYNC_TABLE_STATE_REBUILD_STARTED,
+	SYNC_TABLE_STATE_VALID,
+} SyncingTablesState;
+
+static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+extern List *table_states_not_ready;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+finish_sync_worker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+{
+	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchTableStates(bool *started_tx)
+{
+	static bool has_subrels = false;
+
+	*started_tx = false;
+
+	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch all non-ready tables. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY relations found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subrels = (table_states_not_ready != NIL) ||
+			HasSubscriptionRelations(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			table_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	return has_subrels;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+process_syncing_tables(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			process_syncing_tables_for_sync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			process_syncing_tables_for_apply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..8776fe4e0f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,7 +238,7 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
+void
 process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
@@ -414,7 +361,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
+void
 process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1561,77 +1477,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..fe63303439 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -250,16 +250,21 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() finish_sync_worker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
+extern bool FetchTableStates(bool *started_tx);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
+extern void process_syncing_sequences_for_apply(void);
 extern void invalidate_syncing_table_states(Datum arg, int cacheid,
 											uint32 hashvalue);
 
-- 
2.34.1

Attachment: v20240813-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 5f7e0effb695b0d7a15095b5c1eece4788eed038 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 13 Aug 2024 11:44:51 +0530
Subject: [PATCH v20240813 4/5] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  60 ++-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 363 +++++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 510 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 112 ++--
 src/backend/replication/logical/tablesync.c   |  40 +-
 src/backend/replication/logical/worker.c      |  74 ++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   5 +-
 src/include/replication/worker_internal.h     |  33 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 184 +++++++
 27 files changed, 1403 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..d938e5700f 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables and all_states is true, get all tables; otherwise get
+ * only tables that have not reached READY state.
+ * If getting sequences and all_states is true, get all sequences; otherwise
+ * get only sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,16 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE && !get_sequences) ||
+			/* Skip tables if they were not requested */
+			(relkind != RELKIND_SEQUENCE && !get_tables))
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence synchronization worker to
+ * set the log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page LSN of the sequence in the pg_subscription_rel system
+ * catalog.  It reflects the page LSN of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..4166210643 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication,
+	 * creating this origin is unnecessary at this point. It can be created
+	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
+	 * publication is updated to include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables and
+			 * sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is not necessary at the moment. It can be
+			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note: This is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1021,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1085,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,130 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[8] = {TEXTOID, TEXTOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	List	   *seqlist = NIL;
+	StringInfo	warning_sequences = NULL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT n.nspname, c.relname, s.seqtypid,\n"
+						   "s.seqmin, s.seqmax, s.seqstart, s.seqincrement, s.seqcycle\n"
+						   "FROM pg_publication p,\n"
+						   "      LATERAL pg_get_publication_sequences(p.pubname::text) gps(relid), pg_class c\n"
+						   "      JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						   "      JOIN pg_sequence s ON c.oid = s.seqrelid\n"
+						   "WHERE c.oid = gps.relid AND p.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 8, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		Oid			seqtypid;
+		int64		seqmin;
+		int64		seqmax;
+		int64		seqstart;
+		int64		seqincrement;
+		bool		seqcycle;
+		bool		isnull;
+		RangeVar   *rv;
+		Oid			relid;
+		HeapTuple	tup;
+		Form_pg_sequence seqform;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+		seqtypid = DatumGetObjectId(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
+		seqmin = DatumGetInt64(slot_getattr(slot, 4, &isnull));
+		Assert(!isnull);
+		seqmax = DatumGetInt64(slot_getattr(slot, 5, &isnull));
+		Assert(!isnull);
+		seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+		Assert(!isnull);
+		seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+		Assert(!isnull);
+		seqcycle = DatumGetBool(slot_getattr(slot, 8, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+		/* Get the local sequence */
+		tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+		if (!HeapTupleIsValid(tup))
+			elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+				 get_namespace_name(get_rel_namespace(relid)), get_rel_name(relid));
+
+		seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+		/* Collect sequences whose parameters differ on publisher and subscriber. */
+		if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+			seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+			seqform->seqincrement != seqincrement ||
+			seqform->seqcycle != seqcycle)
+		{
+			if (!warning_sequences)
+				warning_sequences = makeStringInfo();
+			else
+				appendStringInfoString(warning_sequences, ", ");
+
+			appendStringInfo(warning_sequences, "\"%s.%s\"",
+							 get_namespace_name(get_rel_namespace(relid)),
+							 get_rel_name(relid));
+		}
+
+		ReleaseSysCache(tup);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	if (warning_sequences)
+	{
+		/*
+		 * Issue a warning listing all the sequences that differ between the
+		 * publisher and subscriber.
+		 */
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ between the remote and local sequences %s for subscription \"%s\"",
+					   warning_sequences->data, subname),
+				errhint("Alter/Re-create the sequence using the same parameter as in remote."));
+		pfree(warning_sequences);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
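
To make the parameter-mismatch warning in fetch_sequence_list() concrete (the
object and subscription names below are made up), suppose the sequence is
defined differently on the two nodes:

    -- publisher
    CREATE SEQUENCE public.s1 INCREMENT BY 2;

    -- subscriber
    CREATE SEQUENCE public.s1;   -- default INCREMENT BY 1

The sequence is still added to pg_subscription_rel, but the CREATE/ALTER
SUBSCRIPTION command reports something like:

    WARNING:  parameters differ between the remote and local sequences "public.s1" for subscription "sub1"

so the user is told up front that, although the current value will be copied,
values generated locally afterwards would diverge from the publisher's.
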
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
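
For reference, with the grammar addition above the sequence-only refresh is
parsed alongside the existing form, e.g. (subscription name is hypothetical):

    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data = false);
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

The SEQUENCES variant deliberately takes no options, matching the production
above; copy_data is implicitly true and all published sequences are marked for
re-synchronization.
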
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 3964a30109..99d248dd01 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 27a0e30ab7..c3c836b88b 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..aaedba9e07
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,510 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/usercontext.h"
+
+List *sequence_states_not_ready = NIL;
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieve sequence data (last_value, log_cnt, page_lsn and is_called)
+ * from the remote node.
+ *
+ * The sequence's last_value is returned as the function result, while
+ * log_cnt, is_called and page_lsn are returned via the corresponding
+ * output parameters.
+ */
+static int64
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid remoteid, char *nspname,
+						   char *relname, int64 *log_cnt, bool *is_called,
+						   XLogRecPtr *page_lsn)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[4] = {INT8OID, INT8OID, BOOLOID, LSNOID};
+	int64		last_value = (Datum) 0;
+	bool		isnull;
+
+	initStringInfo(&cmd);
+
+	appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called, page_lsn "
+					 "FROM pg_sequence_state(%d)", remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 4, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+
+	return last_value;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	seq_last_value = fetch_remote_sequence_data(conn, remoteid, nspname,
+												relname, &seq_log_cnt, &seq_is_called,
+												&seq_page_lsn);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the remote sequence's page LSN at the time its state was read. */
+	return seq_page_lsn;
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+
+/*
+ * Synchronizing each sequence individually would incur the overhead of
+ * starting and committing a transaction repeatedly. Conversely, an
+ * excessively large batch would keep a transaction (and the locks it
+ * holds) open for an extended period.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Use the slot name rather than the subscription name as the
+	 * application_name, so that synchronous replication can distinguish this
+	 * connection from the one made by the leader apply worker.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or
+		 * synchronized the last remaining sequence?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d sequences of %d sequences for subscription \"%s\" ",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably the result of
+ * system resource errors and are not repeatable; a FATAL error terminates
+ * the worker process anyway.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequence sync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
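
To illustrate the per-sequence remote work done by copy_sequence() and
fetch_remote_sequence_data(), the worker effectively runs the following on the
publisher (schema, sequence name and OID below are hypothetical):

    SELECT c.oid, c.relkind
      FROM pg_catalog.pg_class c
           INNER JOIN pg_catalog.pg_namespace n ON (c.relnamespace = n.oid)
     WHERE n.nspname = 'public' AND c.relname = 's1';

    SELECT last_value, log_cnt, is_called, page_lsn
      FROM pg_sequence_state(16394);

It then applies last_value/log_cnt/is_called locally via SetSequence() and
stores the returned page_lsn in pg_subscription_rel when marking the sequence
READY.
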
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 4bbc4814a4..4f6e66b70b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -24,21 +24,25 @@
 
 typedef enum
 {
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
+	SYNC_RELATION_STATE_NEEDS_REBUILD,
+	SYNC_RELATION_STATE_REBUILD_STARTED,
+	SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 extern List *table_states_not_ready;
+extern List *sequence_states_not_ready;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -53,15 +57,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -70,47 +83,55 @@ finish_sync_worker(void)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 }
 
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchTableStates(bool *started_tx)
+FetchRelationStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	/*
+	 * This is declared static, since the same value can be reused until the
+	 * pg_subscription_rel syscache is invalidated.
+	 */
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_RELATION_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -118,19 +139,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
+		 * If there were not-READY tables found then we know it does. But
 		 * if table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -139,18 +164,26 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_RELATION_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATION_STATE_VALID;
 	}
 
-	return has_subrels;
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	return has_subtables;
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized,
+ * start tablesync workers for any newly added tables, and start a
+ * sequencesync worker if there are newly added sequences.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -168,7 +201,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8776fe4e0f..e26a877d75 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,7 +465,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -1236,7 +1237,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1554,7 +1555,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1562,7 +1563,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1576,17 +1577,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..7ff0d2c4dc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1147,8 +1155,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1203,8 +1214,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1269,8 +1283,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1404,8 +1421,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2246,8 +2266,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3644,8 +3667,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,8 +4712,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 636780673b..e7af1ba4b2 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3241,7 +3241,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Max workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index e83d6846f3..8cf1d86b5f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12007,6 +12007,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..479407abf7 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,10 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
+
+extern void process_syncing_sequences_for_apply(void);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index fe63303439..c57afbbcb0 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,27 +250,30 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() finish_sync_worker(void);
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
-extern bool FetchTableStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
 extern void process_syncing_sequences_for_apply(void);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -331,15 +338,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..999561f44a
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,184 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+($result, my $stdout, my $stderr) = $node_subscriber->psql(
+	'postgres', "
+        ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
+like(
+	$stderr,
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ between the remote and local sequences "public.regress_s4" for subscription "regress_seq_sub"/,
+	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
+);
+
+done_testing();
-- 
2.34.1

v20240813-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From c6e6ed3bd9e9aa60a21ee87570be5ec6ba67cb56 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240813 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 968a998552..a802130774 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19582,6 +19582,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index be37576a34..e83d6846f3 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

#150vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#148)
Re: Logical Replication of sequences

On Tue, 13 Aug 2024 at 12:31, Peter Smith <smithpb2250@gmail.com> wrote:

OBSERVATION #2

When 1000s of sequences are refreshed (set to INIT) then there are
1000s of logs like below:

...
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0698"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0699"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0700"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.873 AEST [10301] LOG: sequence "public.seq_0701"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.873 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
2024-08-13 16:13:57.874 AEST [10301] LOG: sequence "public.seq_0702"
of subscription "sub3" set to INIT state
2024-08-13 16:13:57.874 AEST [10301] STATEMENT: alter subscription
sub3 refresh publication sequences;
...

I felt that showing the STATEMENT for all of these is overkill. How
about changing that ereport LOG so it does not emit the statement 1000
times? Or maybe you can implement it as a "dynamic" log that emits the
STATEMENT for only the first few logs and then skips it for the next
995.

I have changed it to the DEBUG1 log level, as we do for tables, so
these messages will not appear at the default log level.
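For anyone who still wants to see those per-sequence messages while
testing, raising the log verbosity should bring them back; a minimal
sketch, assuming superuser access on the subscriber:

    -- Show DEBUG1 messages (e.g. the per-sequence "set to INIT state" logs)
    ALTER SYSTEM SET log_min_messages = 'debug1';
    SELECT pg_reload_conf();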

OBSERVATION #4

When 1000s of sequences are refreshed then there are 1000s of
associated logs. But (given there is only one sequencesync worker)
those logs are not always in the order that I was expecting to see them.

e.g.
...
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0885" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0887" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0888" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0889" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0890" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0906" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0566" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0568" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0569" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0570" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0571" has
finished
2024-08-13 16:41:47.436 AEST [11735] LOG: logical replication
synchronization for subscription "sub3", sequence "seq_0582" has
finished
...

Is there a way to refresh sequences in a more natural (e.g.
alphabetical) order to make these logs more readable?

I felt this is OK; there is no need to order them here, as it can
easily be done with a script over the logs if required.

The rest of the issues were fixed; the v20240813 version patch
attached at [1] has the changes for the same.
[1]: /messages/by-id/CALDaNm1Nr_n9SBB52L8A10Txyb4nqGJWfHUapwzM5BopvjMhjA@mail.gmail.com

Regards,
Vignesh

#151Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#149)
Re: Logical Replication of sequences

On Tue, Aug 13, 2024 at 10:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 13 Aug 2024 at 09:19, Peter Smith <smithpb2250@gmail.com> wrote:

3.1. GENERAL

Hmm. I am guessing this was provided as a separate patch to aid review
by showing that existing functions are moved? OTOH you can't really
judge this patch properly without already knowing details of what will
come next in the sequencesync. i.e. As a *standalone* patch without
the sequencesync.c the refactoring doesn't make much sense.

Maybe it is OK later to combine patches 0003 and 0004. Alternatively,
keep this patch separated but give greater emphasis in the comment
header to say this patch only exists separately in order to help the
review.

I have kept this patch separate only to show that it has no new code
changes of its own. If we move this into the next patch it will be
difficult for reviewers to know which code is new and which is old.
At commit time we can merge this with the next one. I felt it is
better to note this in the commit message instead of the comment
header, so I updated the commit message.

Yes, I wrote "comment header" but it was a typo; I meant "commit
header". What you did looks good now. Thanks.

~

3.4. function names

With the re-shuffling that this patch does, and changing several from
static to not-static, should the function names remain as they are?
They look random to me.
- finish_sync_worker(void)
- invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
- FetchTableStates(bool *started_tx)
- process_syncing_tables(XLogRecPtr current_lsn)

I think using a consistent naming convention would be better. e.g.
SyncFinishWorker
SyncInvalidateTableStates
SyncFetchTableStates
SyncProcessTables

One advantage of keeping the existing names wherever possible is that
it will help when merging changes to back-branches. So I'm not making
this change.

According to my understanding, the logical replication code tries to
maintain naming conventions for static functions (snake_case) and for
non-static functions (CamelCase) as an aid to code readability. I
think we should either do our best to abide by those conventions, or
we might as well just forget them and have a naming free-for-all.
Since the new syncutils.c module is being introduced by this patch, my
guess is that any future merging to back-branches will be affected
regardless. IMO this is an ideal opportunity to try to nudge the
function names in the right direction. YMMV.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#152Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#149)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh, Here are my review comments for the latest patchset:

Patch v20240813-0001. No comments
Patch v20240813-0002. No comments
Patch v20240813-0003. No comments
Patch v20240813-0004. See below
Patch v20240813-0005. No comments

//////

Patch v20240813-0004

======
src/backend/catalog/pg_subscription.

GetSubscriptionRelations:
nit - modify a condition for readability

======
src/backend/commands/subscriptioncmds.c

fetch_sequence_list:
nit - changed the WARNING message. /parameters differ
between.../parameters differ for.../ (FYI, Chat-GPT agrees that 2nd
way is more correct)
nit - other minor changes to the message and hint

======
.../replication/logical/sequencesync.c

1. LogicalRepSyncSequences

+ ereport(DEBUG1,
+ errmsg("logical replication synchronization for subscription \"%s\",
sequence \"%s\" has finished",
+    get_subscription_name(subid, false), get_rel_name(done_seq->relid)));

DEBUG logs should use errmsg_internal. (This is also fixed in the
nitpicks attachment.)

~

nit - minor change to the log message counting the batched sequences

~~~

process_syncing_sequences_for_apply:
nit - /sequence sync worker/sequencesync worker/

======
src/backend/utils/misc/guc_tables.c

nit - /max workers/maximum number of workers/ (for consistency because
all other GUCs are verbose like this; nothing just says "max".)

======
src/test/subscription/t/034_sequences.pl

nit - adjust the expected WARNING message (which was modified above)

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240814_SEQ_0004.txt (text/plain)
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index d938e57..af2bfe1 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -565,9 +565,11 @@ GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
 		relkind = get_rel_relkind(subrel->srrelid);
 
 		/* Skip sequences if they were not requested */
-		if ((relkind == RELKIND_SEQUENCE && !get_sequences) ||
-			/* Skip tables if they were not requested */
-			(relkind != RELKIND_SEQUENCE && !get_tables))
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
 			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4166210..7d0be40 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -2577,9 +2577,9 @@ fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
 		 */
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				errmsg("parameters differ between the remote and local sequences %s for subscription \"%s\"",
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
 					   warning_sequences->data, subname),
-				errhint("Alter/Re-create the sequence using the same parameter as in remote."));
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
 		pfree(warning_sequences);
 	}
 
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index aaedba9..e2e0421 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -342,12 +342,12 @@ LogicalRepSyncSequences(void)
 				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
 
 				ereport(DEBUG1,
-						errmsg("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
 							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
 			}
 
 			ereport(LOG,
-					errmsg("logical replication synchronized %d sequences of %d sequences for subscription \"%s\" ",
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
 						   curr_seq, seq_count, get_subscription_name(subid, false)));
 
 			/* Commit this batch, and prepare for next batch. */
@@ -477,7 +477,7 @@ process_syncing_sequences_for_apply(void)
 		LWLockRelease(LogicalRepWorkerLock);
 
 		/*
-		 * If there are free sync worker slot(s), start a new sequence sync
+		 * If there are free sync worker slot(s), start a new sequencesync
 		 * worker, and break from the loop.
 		 */
 		if (nsyncworkers < max_sync_workers_per_subscription)
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
index 999561f..38fd7a3 100644
--- a/src/test/subscription/t/034_sequences.pl
+++ b/src/test/subscription/t/034_sequences.pl
@@ -177,7 +177,7 @@ $node_subscriber->safe_psql(
         ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES");
 like(
 	$stderr,
-	qr/WARNING: ( [A-Z0-9]+:)? parameters differ between the remote and local sequences "public.regress_s4" for subscription "regress_seq_sub"/,
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
 	"Refresh publication sequences should throw a warning if the sequence definition is not the same"
 );
 
#153vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#152)
5 attachment(s)
Re: Logical Replication of sequences

On Wed, 14 Aug 2024 at 08:39, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my review comments for the latest patchset:

Patch v20240813-0001. No comments
Patch v20240813-0002. No comments
Patch v20240813-0003. No comments
Patch v20240813-0004. See below
Patch v20240813-0005. No comments

//////

Patch v20240813-0004

The comments have been addressed. The patch also resolves a
limitation where sequence parameter changes made between creating (or
altering) a subscription and the sequence synchronization worker's
sync were not detected and reported to the user. This is now handled
by retrieving both the sequence value and its parameters in a single
SELECT statement. The corresponding documentation was also updated.
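Roughly, the publisher-side fetch is now a single query along these
lines (a sketch only, not necessarily the exact statement used by the
patch; it combines the pg_sequence_state() function from patch 0001
with the pg_sequence catalog, using the regress_s4 sequence from the
TAP test purely as an example):

    -- Sketch: fetch the sequence state and its parameters in one round trip,
    -- so the subscriber can both set the local value and detect parameter
    -- mismatches from the same result row.
    SELECT s.page_lsn, s.last_value, s.log_cnt, s.is_called,
           p.seqstart, p.seqincrement, p.seqmin, p.seqmax,
           p.seqcache, p.seqcycle
    FROM pg_sequence_state('public.regress_s4'::regclass) s,
         pg_sequence p
    WHERE p.seqrelid = 'public.regress_s4'::regclass;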

The attached v20240814 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20240814-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From d2612e9201ee6f59492d0794b548c803fe166c9d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240814 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 0602398a54..acfac67f8c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -137,7 +137,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -997,6 +998,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1019,6 +1056,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 6ea709988e..5ba58fff78 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1494,7 +1498,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1912,12 +1916,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.  Also check
+ * that each publication object type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 2f1b6abbfa..4b402a6fdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass all
+	 * sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 30b6371134..6c573a12a1 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -723,10 +795,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -910,10 +982,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1118,10 +1190,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1159,10 +1231,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1240,10 +1312,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1253,20 +1325,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1282,19 +1354,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1308,44 +1380,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1379,10 +1451,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1390,20 +1462,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,10 +1483,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1423,10 +1495,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1435,10 +1507,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1446,10 +1518,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1457,10 +1529,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1468,29 +1540,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1499,10 +1571,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1593,18 +1665,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1614,20 +1686,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 479d4f3264..ac77fe4516 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

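For reference, the publisher-side usage that the regression tests above exercise boils down to something like the following; the sequence and publication names here are only placeholders:

CREATE SEQUENCE app_id_seq;
CREATE PUBLICATION pub_all_seqs FOR ALL SEQUENCES;

-- puballsequences is the pg_publication flag set by FOR ALL SEQUENCES
SELECT pubname, puballtables, puballsequences
  FROM pg_publication WHERE pubname = 'pub_all_seqs';

-- tables and sequences can be published together, but each keyword
-- may appear only once
CREATE PUBLICATION pub_all_objs FOR ALL SEQUENCES, TABLES;
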
Attachment: v20240814-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 50b63b68e2e070bd285932143153b252845cdedd Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 13 Aug 2024 11:44:51 +0530
Subject: [PATCH v20240814 4/5] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Sequences are refreshed with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - A sequence sync worker then synchronizes sequences as in the
     subscription creation process; only the newly added sequences
     are synchronized.

3) Introduce a new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - A sequence sync worker then synchronizes all sequences as in the
     subscription creation process.
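
Taken together, the subscriber-side flow described above is roughly the following sketch; the subscription name, publication name and connection string are only placeholders:

-- sequences belonging to the publication are added in 'init' state
-- and synchronized by the sequence sync worker
CREATE SUBSCRIPTION sub1
    CONNECTION 'dbname=postgres host=publisher'
    PUBLICATION pub_all_seqs;

-- add newly published sequences (and drop stale ones); only the
-- newly added sequences are synchronized
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

-- re-synchronize every sequence known to the subscription
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- per-sequence sync state: 'i' = init, 'r' = ready
SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;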
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  62 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 296 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 625 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 112 +++-
 src/backend/replication/logical/tablesync.c   |  40 +-
 src/backend/replication/logical/worker.c      |  74 ++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   5 +-
 src/include/replication/worker_internal.h     |  33 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 27 files changed, 1455 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index acfac67f8c..980e5574a4 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1292,3 +1292,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn will be utilized in logical replication sequence
+ * synchronization to record the page_lsn of sequence in the pg_subscription_rel
+ * system catalog. It will reflect the page_lsn of the remote sequence at the
+ * moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index d124bfe55c..9fff2880a7 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication,
+	 * creating this origin is unnecessary at this point. It can be created
+	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
+	 * publication is updated to include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables and
+			 * sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is not necessary at the moment. It can be
+			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,7 +1021,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 rv->schemaname, rv->relname, sub->name)));
 			}
 		}
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,38 +1085,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
 										 sub->name)));
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive list of sequences from the publisher: %s",
+						res->err)));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 3964a30109..99d248dd01 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 27a0e30ab7..c3c836b88b 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..8211121fa8
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,625 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
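+ *
+ * For example (illustrative only; the connection string is a placeholder),
+ * a typical workflow that triggers sequence synchronization is:
+ *
+ *   -- on the publisher
+ *   CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
+ *   -- on the subscriber
+ *   CREATE SUBSCRIPTION seq_sub CONNECTION '...' PUBLICATION seq_pub;
+ *   -- later, to also re-synchronize already-synced sequences
+ *   ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;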
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List *sequence_states_not_ready = NIL;
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if there are discrepancies between the sequence parameters in
+ *   the publisher and subscriber.
+ * - FALSE if the parameters match.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+								INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmax;
+	int64		seqmin;
+	bool		seqcycle;
+	bool		seq_not_match = false;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmax, seqmin, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errmsg("could not receive sequence list from the publisher: %s",
+						res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
+		seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
+		seqform->seqincrement != seqincrement || seqform->seqcycle != seqcycle)
+		seq_not_match = true;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_not_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
+					 "  FROM pg_catalog.pg_class c"
+					 "  INNER JOIN pg_catalog.pg_namespace n"
+					 "        ON (c.relnamespace = n.oid)"
+					 " WHERE n.nspname = %s"
+					 "   AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+						nspname, relname, res->err)));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_OBJECT),
+				 errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname)));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = fetch_remote_sequence_data(conn, relid, remoteid,
+													nspname, relname,
+													&seq_log_cnt, &seq_is_called,
+													&seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * report_sequence_mismatch
+ *
+ * Records details of sequence mismatches as a warning.
+ */
+static void
+report_sequence_mismatch(StringInfo warning_sequences)
+{
+	if (warning_sequences->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					warning_sequences->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+		resetStringInfo(warning_sequences);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the warning_sequences string.
+ */
+static void
+append_mismatched_sequences(StringInfo warning_sequences, Relation rel)
+{
+	if (warning_sequences->len)
+		appendStringInfoString(warning_sequences, ", ");
+
+	appendStringInfo(warning_sequences, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(rel)),
+					 RelationGetRelationName(rel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	warning_sequences = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Conversely, an excessively large
+ * batch size would keep transactions open for extended periods, which we
+ * also want to avoid.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not connect to the publisher: %s", err)));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(warning_sequences, sequence_rel);
+
+			report_sequence_mismatch(warning_sequences);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(warning_sequences, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Obtain the starting index of the current batch. */
+			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+
+			/* LOG all the sequences synchronized during current batch. */
+			for (; i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_sequence_mismatch(warning_sequences);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 4bbc4814a4..4f6e66b70b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -24,21 +24,25 @@
 
 typedef enum
 {
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
+	SYNC_RELATION_STATE_NEEDS_REBUILD,
+	SYNC_RELATION_STATE_REBUILD_STARTED,
+	SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 extern List *table_states_not_ready;
+extern List *sequence_states_not_ready;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -53,15 +57,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -70,47 +83,55 @@ finish_sync_worker(void)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 }
 
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchTableStates(bool *started_tx)
+FetchRelationStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	/*
+	 * This is declared as static, since the same value can be used until the
+	 * system table is invalidated.
+	 */
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_RELATION_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -118,19 +139,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
+		 * If there were not-READY tables found then we know it does. But
 		 * if table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -139,18 +164,26 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_RELATION_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATION_STATE_VALID;
 	}
 
-	return has_subrels;
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	return has_subtables;
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -168,7 +201,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8776fe4e0f..e26a877d75 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,7 +465,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -1236,7 +1237,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1554,7 +1555,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1562,7 +1563,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1576,17 +1577,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..7ff0d2c4dc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1147,8 +1155,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1203,8 +1214,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1269,8 +1283,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1404,8 +1421,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2246,8 +2266,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3644,8 +3667,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,8 +4712,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 636780673b..a05a944313 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3241,7 +3241,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5ede8442b4..c9fd1fa26b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12015,6 +12015,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..479407abf7 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,10 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
+
+extern void process_syncing_sequences_for_apply(void);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index fe63303439..c57afbbcb0 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,27 +250,30 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() finish_sync_worker(void);
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
-extern bool FetchTableStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
 extern void process_syncing_sequences_for_apply(void);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -331,15 +338,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..0d89651697
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
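+# (Expected values: last_value is 100 after the 100 nextval() calls above;
+# log_cnt reflects roughly how many pre-logged sequence values remain on the
+# publisher, since sequences WAL-log values in batches; is_called is true once
+# the sequence has been used.)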
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync sequences newly
+# created on the publisher, but changes to already-synced sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should log a warning
+# when the sequence definition differs between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning about differing parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.34.1

v20240814-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 5386c6c045d56a8613c1a031cfabd7f2757088c4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240814 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a1a1d58a43..733570dd99 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..ed1fb93cbf 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1872,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2196,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2211,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   the sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

v20240814-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 8b15bb003e637bb969bcc4831a314b673a289e1d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240814 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index cdde647513..22d3dcd79f 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19616,6 +19616,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current on-disk value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..5ede8442b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20240814-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From c586e879269b5f931262cad44a24eefce33d868a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240814 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/replication/logical/Makefile    |   1 +
 src/backend/replication/logical/meson.build |   1 +
 src/backend/replication/logical/syncutils.c | 178 ++++++++++++++++++++
 src/backend/replication/logical/tablesync.c | 161 +-----------------
 src/include/replication/worker_internal.h   |   5 +
 5 files changed, 188 insertions(+), 158 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..3964a30109 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -27,6 +27,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..27a0e30ab7 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -13,6 +13,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..4bbc4814a4
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,178 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_TABLE_STATE_NEEDS_REBUILD,
+	SYNC_TABLE_STATE_REBUILD_STARTED,
+	SYNC_TABLE_STATE_VALID,
+} SyncingTablesState;
+
+static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+extern List *table_states_not_ready;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+finish_sync_worker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+{
+	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchTableStates(bool *started_tx)
+{
+	static bool has_subrels = false;
+
+	*started_tx = false;
+
+	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch all non-ready tables. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY relations found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subrels = (table_states_not_ready != NIL) ||
+			HasSubscriptionRelations(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			table_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	return has_subrels;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+process_syncing_tables(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			process_syncing_tables_for_sync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			process_syncing_tables_for_apply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..8776fe4e0f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,7 +238,7 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
+void
 process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
@@ -414,7 +361,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
+void
 process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1561,77 +1477,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..fe63303439 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -250,16 +250,21 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() finish_sync_worker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
+extern bool FetchTableStates(bool *started_tx);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
+extern void process_syncing_sequences_for_apply(void);
 extern void invalidate_syncing_table_states(Datum arg, int cacheid,
 											uint32 hashvalue);
 
-- 
2.34.1

#154Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#153)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh, I have reviewed your latest patchset:

v20240814-0001. No comments
v20240814-0002. No comments
v20240814-0003. No comments
v20240814-0004. See below
v20240814-0005. No comments

//////

v20240814-0004.

======
src/backend/commands/subscriptioncmds.c

CreateSubscription:
nit - XXX comments

AlterSubscription_refresh:
nit - unnecessary parens in ereport

AlterSubscription:
nit - unnecessary parens in ereport

fetch_sequence_list:
nit - unnecessary parens in ereport

======
.../replication/logical/sequencesync.c

1. fetch_remote_sequence_data

+ * Returns:
+ * - TRUE if there are discrepancies between the sequence parameters in
+ *   the publisher and subscriber.
+ * - FALSE if the parameters match.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+    char *nspname, char *relname, int64 *log_cnt,
+    bool *is_called, XLogRecPtr *page_lsn,
+    int64 *last_value)

IMO it is more natural to return TRUE for good results and FALSE for
bad ones. (FYI, I have implemented this reversal in the nitpicks
attachment).
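
To make the polarity concrete, here is roughly the shape I had in mind
(an illustrative sketch only -- the helper name and argument list are
invented; only the pg_sequence field names are real):

/*
 * Sketch: return true when the local and remote sequence parameters
 * match, false when they differ, so the caller reads naturally as
 * "if (!sequence_params_match(...)) append_mismatched_sequences(...)".
 */
static bool
sequence_params_match(Form_pg_sequence seqform, Oid remote_typid,
					  int64 remote_start, int64 remote_increment,
					  int64 remote_min, int64 remote_max, bool remote_cycle)
{
	return seqform->seqtypid == remote_typid &&
		   seqform->seqstart == remote_start &&
		   seqform->seqincrement == remote_increment &&
		   seqform->seqmin == remote_min &&
		   seqform->seqmax == remote_max &&
		   seqform->seqcycle == remote_cycle;
}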

~

nit - swapped columns seqmin and seqmax in the SQL to fetch them in
the natural order
nit - unnecessary parens in ereport

~~~

copy_sequence:
nit - update function comment to document the output parameter
nit - Assert that *sequence_mismatch is false on entry to this function
nit - tweak wrapping and add \n in the SQL
nit - unnecessary parens in ereport

report_sequence_mismatch:
nit - modify function comment
nit - function name changed
/report_sequence_mismatch/report_mismatched_sequences/ (now plural,
and more like the other one)

append_mismatched_sequences:
nit - param name /rel/seqrel/

~~~

2. LogicalRepSyncSequences:
+ Relation sequence_rel;
+ XLogRecPtr sequence_lsn;
+ bool sequence_mismatch;

The 'sequence_mismatch' variable must be initialized to false; otherwise
we cannot trust that it gets assigned.
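
For example, something like this (sketch only; I am assuming
copy_sequence() sets the flag only when it detects a mismatch):

	Relation	sequence_rel;
	XLogRecPtr	sequence_lsn;
	bool		sequence_mismatch = false;	/* must start out false */

together with the entry Assert suggested for copy_sequence() above.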

~

LogicalRepSyncSequences:
nit - unnecessary parens in ereport
nit - move the for-loop variable declaration
nit - remove a blank line

process_syncing_sequences_for_apply:
nit - variable declaration indent

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240815_SEQ_0004.txt (text/plain)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 9fff288..22115bd 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -726,10 +726,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
 	/*
-	 * XXX: If the subscription is for a sequence-only publication,
-	 * creating this origin is unnecessary at this point. It can be created
-	 * later during the ALTER SUBSCRIPTION ... REFRESH command, if the
-	 * publication is updated to include tables or tables in schemas.
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
 	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
@@ -800,9 +800,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * export it.
 			 *
 			 * XXX: If the subscription is for a sequence-only publication,
-			 * creating this slot is not necessary at the moment. It can be
-			 * created during the ALTER SUBSCRIPTION ... REFRESH command if the
-			 * publication is updated to include tables or tables in schema.
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -1021,9 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
-										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -1125,11 +1125,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
 										 relkind == RELKIND_SEQUENCE ? "sequence" : "table",
 										 get_namespace_name(get_rel_namespace(relid)),
 										 get_rel_name(relid),
-										 sub->name)));
+										 sub->name));
 			}
 		}
 
@@ -1615,8 +1615,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
-							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions")));
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
@@ -2494,8 +2494,8 @@ fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
 
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
-				(errmsg("could not receive list of sequences from the publisher: %s",
-						res->err)));
+				errmsg("could not receive list of sequences from the publisher: %s",
+						res->err));
 
 	/* Process sequences. */
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 8211121..1f45564 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -82,9 +82,8 @@ List *sequence_states_not_ready = NIL;
  * - last_value: The last value of the sequence.
  *
  * Returns:
- * - TRUE if there are discrepancies between the sequence parameters in
- *   the publisher and subscriber.
- * - FALSE if the parameters match.
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
  */
 static bool
 fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
@@ -101,17 +100,17 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
 	Oid			seqtypid;
 	int64		seqstart;
 	int64		seqincrement;
-	int64		seqmax;
 	int64		seqmin;
+	int64		seqmax;
 	bool		seqcycle;
-	bool		seq_not_match = false;
+	bool		seq_params_match;
 	HeapTuple	tup;
 	Form_pg_sequence seqform;
 
 	initStringInfo(&cmd);
 	appendStringInfo(&cmd,
 					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
-					 "seqtypid, seqstart, seqincrement, seqmax, seqmin, seqcycle\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
 					 "FROM pg_sequence_state(%d), pg_sequence where seqrelid = %d",
 					 remoteid, remoteid);
 
@@ -120,16 +119,16 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
 
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
-				(errmsg("could not receive sequence list from the publisher: %s",
-						res->err)));
+				errmsg("could not receive sequence list from the publisher: %s",
+					   res->err));
 
 	/* Process the sequence. */
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_OBJECT),
-				 errmsg("sequence \"%s.%s\" not found on publisher",
-						nspname, relname)));
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+						nspname, relname));
 
 	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
 	Assert(!isnull);
@@ -152,10 +151,10 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
 	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
 	Assert(!isnull);
 
-	seqmax = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
 	Assert(!isnull);
 
-	seqmin = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
 	Assert(!isnull);
 
 	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
@@ -169,16 +168,17 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
 
 	seqform = (Form_pg_sequence) GETSTRUCT(tup);
 
-	if (seqform->seqtypid != seqtypid || seqform->seqmin != seqmin ||
-		seqform->seqmax != seqmax || seqform->seqstart != seqstart ||
-		seqform->seqincrement != seqincrement || seqform->seqcycle != seqcycle)
-		seq_not_match = true;
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
 
 	ReleaseSysCache(tup);
 	ExecDropSingleTupleTableSlot(slot);
 	walrcv_clear_result(res);
 
-	return seq_not_match;
+	return seq_params_match;
 }
 
 /*
@@ -187,6 +187,9 @@ fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
  * Fetch the sequence value from the publisher and set the subscriber sequence
  * with the same value. Caller is responsible for locking the local
  * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
  */
 static XLogRecPtr
 copy_sequence(WalReceiverConn *conn, Relation rel,
@@ -207,14 +210,15 @@ copy_sequence(WalReceiverConn *conn, Relation rel,
 	char	   *relname = RelationGetRelationName(rel);
 	Oid			relid = RelationGetRelid(rel);
 
+	Assert(!*sequence_mismatch);
+
 	/* Fetch Oid. */
 	initStringInfo(&cmd);
-	appendStringInfo(&cmd, "SELECT c.oid, c.relkind"
-					 "  FROM pg_catalog.pg_class c"
-					 "  INNER JOIN pg_catalog.pg_namespace n"
-					 "        ON (c.relnamespace = n.oid)"
-					 " WHERE n.nspname = %s"
-					 "   AND c.relname = %s",
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
 					 quote_literal_cstr(nspname),
 					 quote_literal_cstr(relname));
 
@@ -222,16 +226,16 @@ copy_sequence(WalReceiverConn *conn, Relation rel,
 					  lengthof(tableRow), tableRow);
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
-				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
-						nspname, relname, res->err)));
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
 
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_OBJECT),
-				 errmsg("sequence \"%s.%s\" not found on publisher",
-						nspname, relname)));
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
 
 	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
 	Assert(!isnull);
@@ -242,7 +246,7 @@ copy_sequence(WalReceiverConn *conn, Relation rel,
 	ExecDropSingleTupleTableSlot(slot);
 	walrcv_clear_result(res);
 
-	*sequence_mismatch = fetch_remote_sequence_data(conn, relid, remoteid,
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
 													nspname, relname,
 													&seq_log_cnt, &seq_is_called,
 													&seq_page_lsn, &seq_last_value);
@@ -255,12 +259,12 @@ copy_sequence(WalReceiverConn *conn, Relation rel,
 }
 
 /*
- * report_sequence_mismatch
+ * report_mismatched_sequences
  *
- * Records details of sequence mismatches as a warning.
+ * Report any sequence mismatches as a single warning log.
  */
 static void
-report_sequence_mismatch(StringInfo warning_sequences)
+report_mismatched_sequences(StringInfo warning_sequences)
 {
 	if (warning_sequences->len)
 	{
@@ -269,6 +273,7 @@ report_sequence_mismatch(StringInfo warning_sequences)
 				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
 					warning_sequences->data, MySubscription->name),
 				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
 		resetStringInfo(warning_sequences);
 	}
 }
@@ -280,14 +285,14 @@ report_sequence_mismatch(StringInfo warning_sequences)
  * and subscriber to the warning_sequences string.
  */
 static void
-append_mismatched_sequences(StringInfo warning_sequences, Relation	rel)
+append_mismatched_sequences(StringInfo warning_sequences, Relation seqrel)
 {
 	if (warning_sequences->len)
 		appendStringInfoString(warning_sequences, ", ");
 
 	appendStringInfo(warning_sequences, "\"%s.%s\"",
-					 get_namespace_name(RelationGetNamespace(rel)),
-					 RelationGetRelationName(rel));
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
 }
 
 /*
@@ -355,15 +360,15 @@ LogicalRepSyncSequences(void)
 					   slotname, &err);
 	if (LogRepWorkerWalRcvConn == NULL)
 		ereport(ERROR,
-				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("could not connect to the publisher: %s", err)));
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
 
 	seq_count = list_length(sequences_not_synced);
 	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
 	{
 		Relation	sequence_rel;
 		XLogRecPtr	sequence_lsn;
-		bool		sequence_mismatch;
+		bool		sequence_mismatch = false;
 
 		CHECK_FOR_INTERRUPTS();
 
@@ -422,7 +427,7 @@ LogicalRepSyncSequences(void)
 			if (sequence_mismatch)
 				append_mismatched_sequences(warning_sequences, sequence_rel);
 
-			report_sequence_mismatch(warning_sequences);
+			report_mismatched_sequences(warning_sequences);
 			PG_RE_THROW();
 		}
 		PG_END_TRY();
@@ -444,11 +449,9 @@ LogicalRepSyncSequences(void)
 		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
 			curr_seq == seq_count)
 		{
-			/* Obtain the starting index of the current batch. */
-			int			i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
-
 			/* LOG all the sequences synchronized during current batch. */
-			for (; i < curr_seq; i++)
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
 			{
 				SubscriptionRelState *done_seq;
 
@@ -459,7 +462,7 @@ LogicalRepSyncSequences(void)
 							   get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
 			}
 
-			report_sequence_mismatch(warning_sequences);
+			report_mismatched_sequences(warning_sequences);
 
 			ereport(LOG,
 					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
@@ -469,7 +472,6 @@ LogicalRepSyncSequences(void)
 			CommitTransactionCommand();
 			start_txn = true;
 		}
-
 	}
 
 	list_free_deep(sequences_not_synced);
@@ -554,7 +556,7 @@ process_syncing_sequences_for_apply(void)
 	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
 	{
 		LogicalRepWorker *syncworker;
-		int			nsyncworkers;
+		int				  nsyncworkers;
 
 		if (!started_tx)
 		{
#155vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#154)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 15 Aug 2024 at 11:57, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, I have reviewed your latest patchset:

v20240814-0001. No comments
v20240814-0002. No comments
v20240814-0003. No comments
v20240814-0004. See below
v20240814-0005. No comments

//////

v20240814-0004.

These comments are addressed in the attached v20240815 version of the patch.

Regards,
Vignesh

Attachments:

v20240815-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From da26e38faa4a9dc24b0d04c2237aef7b26709dad Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240815 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 5dd95d73a1..224c66d882 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19616,6 +19616,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..5ede8442b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20240815-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 498e903ff6d9b75c9ddfe76f385ce7a5f947716c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240815 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2937384b00..4aad02e1ee 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..ed1fb93cbf 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1872,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2196,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2211,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

v20240815-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 8b564c8399e3215852dbb46921a2f6ae3d56f104 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 13 Aug 2024 11:44:51 +0530
Subject: [PATCH v20240815 4/5] Enhance sequence synchronization during
 subscription management

This commit introduces sequence synchronization:
1) During subscription creation:
   - The subscriber retrieves sequences associated with publications.
   - Sequences are added in 'init' state to the pg_subscription_rel table.
   - A new sequence synchronization worker handles synchronization in
     batches of 100 sequences:
     a) Retrieves sequence values using pg_sequence_state from the publisher.
     b) Sets sequence values accordingly.
     c) Updates sequence state to 'READY'.
     d) Commits batches of 100 synchronized sequences.

2) Refreshing sequences:
   - Refreshing sequences occurs with
        ALTER SUBSCRIPTION ... REFRESH PUBLICATION (no syntax change).
   - Stale sequences are removed from pg_subscription_rel.
   - Newly added sequences in the publisher are added in 'init'
     state to pg_subscription_rel.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
   - Sequence synchronization occurs for newly added sequences only.

3) Introduce new command for refreshing all sequences:
   - ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
   - Removes stale sequences and adds newly added sequences from
     the publisher to pg_subscription_rel.
   - Resets all sequences in pg_subscription_rel to 'init' state.
   - Initiates sequence synchronization for all sequences by sequence
     sync worker as listed in subscription creation process.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  62 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 627 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 114 +++-
 src/backend/replication/logical/tablesync.c   |  42 +-
 src/backend/replication/logical/worker.c      |  74 ++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   5 +-
 src/include/replication/worker_internal.h     |  33 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 src/tools/pgindent/typedefs.list              |   2 +-
 28 files changed, 1464 insertions(+), 196 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
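
GetSubscriptionRelations() now returns a mix of table and sequence entries,
since pg_subscription_rel tracks both. As a quick sanity check on the
subscriber, that mix can be listed with a sketch like this (existing catalogs
only, no new columns assumed):

    SELECT r.srsubid, c.relname, c.relkind, r.srsubstate, r.srsublsn
      FROM pg_subscription_rel r
      JOIN pg_class c ON c.oid = r.srrelid
     ORDER BY c.relkind, c.relname;
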
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
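
For reference, the sequences published by a given publication can then be
listed through the new view like this (the publication name below is just a
placeholder):

    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'seq_pub'
     ORDER BY schemaname, sequencename;
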
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker, to set the
+ * log_cnt of a sequence while synchronizing its value from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg setval procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg setval procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record, in the pg_subscription_rel system catalog, the page LSN of the
+ * remote sequence at the moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index b925c464ae..597407f873 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname,
+								 List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The relation list includes
+			 * both tables and sequences fetched from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or last refreshed.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	Assert(!resync_all_sequences ||
+		   (copy_data && !refresh_tables && refresh_sequences));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables,
+												 refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * stop workers only when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip sequences; they do not have tablesync slots or origins. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
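
With the new grammar, the two refresh variants behave as described in
AlterSubscription_refresh(); for example (the subscription name is a
placeholder):

    -- Mark every subscribed sequence INIT again so the sequencesync worker
    -- re-synchronizes all of them from the publisher.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

    -- The existing form still refreshes tables, and now also picks up
    -- newly published (or dropped) sequences.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
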
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 3964a30109..99d248dd01 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequencesync worker in the shared-memory
+ * slot of the subscription's apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
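
While a sequencesync worker is running it should be visible alongside the
other workers; for example, assuming the standard pg_stat_subscription view
(which is backed by pg_stat_get_subscription()):

    SELECT subname, pid, worker_type
      FROM pg_stat_subscription
     WHERE worker_type = 'sequence synchronization';
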
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 27a0e30ab7..c3c836b88b 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..1aa5ab8822
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,627 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
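+/*
+ * Subscription sequences (currently those in INIT state) that still need to
+ * be synchronized.  Maintained by FetchRelationStates() and used to decide
+ * whether a sequencesync worker needs to be launched.
+ */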
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+	INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_catalog.pg_sequence_state(%u), pg_catalog.pg_sequence\n"
+					 "WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence state from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence at the time it was fetched. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo warning_sequences)
+{
+	if (warning_sequences->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   warning_sequences->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences so that they have the same parameters as their remote counterparts."));
+
+		resetStringInfo(warning_sequences);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the warning_sequences string.
+ */
+static void
+append_mismatched_sequences(StringInfo warning_sequences, Relation seqrel)
+{
+	if (warning_sequences->len)
+		appendStringInfoString(warning_sequences, ", ");
+
+	appendStringInfo(warning_sequences, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	warning_sequences = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs the overhead of starting
+ * and committing a transaction repeatedly. On the other hand, an excessively
+ * large batch would keep a transaction, and the locks it holds, open for too
+ * long, so cap the number of sequences synchronized per transaction.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/*
+	 * Copy the sequence state info into a permanent memory context, as it
+	 * must survive the per-batch transaction commits done below.
+	 */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * The sequence copy does not honor RLS policies.  That is not a
+		 * problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not
+		 * be able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, report a warning for the sequences
+		 * whose parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(warning_sequences, sequence_rel);
+
+			report_mismatched_sequences(warning_sequences);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(warning_sequences, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(warning_sequences);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
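
For clarity, the per-sequence fetch that fetch_remote_sequence_data() issues
against the publisher looks like this when spelled out (16394 is a made-up
remote sequence OID):

    SELECT last_value, log_cnt, is_called, page_lsn,
           seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
      FROM pg_catalog.pg_sequence_state(16394), pg_catalog.pg_sequence
     WHERE seqrelid = 16394;
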
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 4bbc4814a4..51f7f4c33e 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -24,21 +24,25 @@
 
 typedef enum
 {
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
+	SYNC_RELATION_STATE_NEEDS_REBUILD,
+	SYNC_RELATION_STATE_REBUILD_STARTED,
+	SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 extern List *table_states_not_ready;
+extern List *sequence_states_not_ready;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -53,15 +57,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -70,47 +83,55 @@ finish_sync_worker(void)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 }
 
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchTableStates(bool *started_tx)
+FetchRelationStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	/*
+	 * This is declared static so the value is remembered across calls; it is
+	 * recomputed whenever the subscription relation states are invalidated.
+	 */
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_RELATION_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -118,19 +139,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -139,18 +164,26 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_RELATION_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATION_STATE_VALID;
 	}
 
-	return has_subrels;
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	return has_subtables;
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized and
+ * start new tablesync workers for the newly added tables and start new
+ * sequencesync worker for the newly added sequences.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -168,7 +201,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8776fe4e0f..079d4f0a3a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -122,7 +122,7 @@
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-List *table_states_not_ready = NIL;
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
@@ -161,7 +161,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,7 +465,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -1236,7 +1237,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1554,7 +1555,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1562,7 +1563,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1576,17 +1577,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..7ff0d2c4dc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1147,8 +1155,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1203,8 +1214,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1269,8 +1283,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1404,8 +1421,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2246,8 +2266,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3644,8 +3667,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,8 +4712,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 79ecaa4c4c..7d5c4e0b22 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5ede8442b4..c9fd1fa26b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12015,6 +12015,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..479407abf7 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,10 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
+
+extern void process_syncing_sequences_for_apply(void);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index fe63303439..c57afbbcb0 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,27 +250,30 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() finish_sync_worker(void);
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
-extern bool FetchTableStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
 extern void process_syncing_sequences_for_apply(void);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -331,15 +338,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..0d89651697
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0ce48da963..6595a46692 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2788,13 +2788,13 @@ SupportRequestSelectivity
 SupportRequestSimplify
 SupportRequestWFuncMonotonic
 Syn
+SyncingRelationsState
 SyncOps
 SyncRepConfigData
 SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1
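
For quick reference, a minimal sketch of the workflow exercised by the new
TAP test t/034_sequences.pl above; the publication/subscription names and the
connection string are illustrative placeholders, not part of the patch itself:

    -- publisher
    CREATE SEQUENCE regress_s1;
    CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

    -- subscriber (the sequence must already exist locally)
    CREATE SEQUENCE regress_s1;
    CREATE SUBSCRIPTION regress_seq_sub
        CONNECTION 'dbname=postgres ...'   -- illustrative connection string
        PUBLICATION regress_seq_pub;

    -- pick up newly published sequences only
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;

    -- re-synchronize all published sequences, including existing ones
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;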

Attachment: v20240815-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From cd321a0e097b7d40db9329cbc4501612806777c9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240815 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/replication/logical/Makefile    |   1 +
 src/backend/replication/logical/meson.build |   1 +
 src/backend/replication/logical/syncutils.c | 178 ++++++++++++++++++++
 src/backend/replication/logical/tablesync.c | 161 +-----------------
 src/include/replication/worker_internal.h   |   5 +
 5 files changed, 188 insertions(+), 158 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..3964a30109 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -27,6 +27,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..27a0e30ab7 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -13,6 +13,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..4bbc4814a4
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,178 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_TABLE_STATE_NEEDS_REBUILD,
+	SYNC_TABLE_STATE_REBUILD_STARTED,
+	SYNC_TABLE_STATE_VALID,
+} SyncingTablesState;
+
+static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+extern List *table_states_not_ready;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+finish_sync_worker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+{
+	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchTableStates(bool *started_tx)
+{
+	static bool has_subrels = false;
+
+	*started_tx = false;
+
+	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch all non-ready tables. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY relations found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subrels = (table_states_not_ready != NIL) ||
+			HasSubscriptionRelations(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			table_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	return has_subrels;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+process_syncing_tables(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			process_syncing_tables_for_sync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			process_syncing_tables_for_apply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..8776fe4e0f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,7 +238,7 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
+void
 process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
@@ -414,7 +361,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
+void
 process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1561,77 +1477,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..fe63303439 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -250,16 +250,21 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() finish_sync_worker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
+extern bool FetchTableStates(bool *started_tx);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
+extern void process_syncing_sequences_for_apply(void);
 extern void invalidate_syncing_table_states(Datum arg, int cacheid,
 											uint32 hashvalue);
 
-- 
2.34.1

Attachment: v20240815-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From f01be8c61f0b872089b0aeee5e05ddd3f2bc7cf6 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240815 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..b081b7249b 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1486,7 +1490,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
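+	/* The output flags are expected to be initialized to false by the caller. */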
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
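+	/* puballsequences is only available on servers of version 18 or newer. */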
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
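+	/* Emit the FOR ALL clause for every object type the publication covers. */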
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
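+		# CREATE used "SEQUENCES, TABLES"; pg_dump emits them as "TABLES, SEQUENCES".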
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
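+		/* One header per column selected by the query above. */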
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
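+			/* List any publications that publish this sequence via FOR ALL SEQUENCES. */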
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass all
+	 * sequences in the database (except unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
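+/* A single FOR ALL <object type> entry produced by the parser */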
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1
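
To make the intent of the regression tests above concrete, here is a minimal usage sketch of the syntax they exercise (object names are illustrative, assuming the grammar and the puballsequences catalog flag added by this patch set):

CREATE SEQUENCE regress_seq_demo;

-- publish every sequence in the database
CREATE PUBLICATION pub_all_seqs FOR ALL SEQUENCES;

-- publish all tables and all sequences from a single publication
CREATE PUBLICATION pub_all_objs FOR ALL SEQUENCES, TABLES;

-- puballsequences is the new flag in pg_publication; \dRp+ shows it
-- as the "All sequences" column in the expected output above
SELECT pubname, puballtables, puballsequences
FROM pg_publication
WHERE pubname IN ('pub_all_seqs', 'pub_all_objs');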

#156Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#155)
2 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh. I looked at the latest v20240815* patch set.

I have only the following few comments for patch v20240815-0004, below.

======
Commit message.

Please see the attachment for some suggested updates.

======
src/backend/commands/subscriptioncmds.c

CreateSubscription:
nit - fix wording in one of the XXX comments

======
.../replication/logical/sequencesync.c

report_mismatched_sequences:
nit - param name /warning_sequences/mismatched_seqs/

append_mismatched_sequences:
nit - param name /warning_sequences/mismatched_seqs/

LogicalRepSyncSequences:
nit - var name /warning_sequences/mismatched_seqs/

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240816_seq_0004.txt (text/plain; charset=US-ASCII)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 534d385..e799a41 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -800,9 +800,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * export it.
 			 *
 			 * XXX: If the subscription is for a sequence-only publication,
-			 * creating this slot. It can be created later during the ALTER
-			 * SUBSCRIPTION ... REFRESH command, if the publication is updated
-			 * to include tables or tables in schema.
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 1aa5ab8..934646f 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -264,17 +264,17 @@ copy_sequence(WalReceiverConn *conn, Relation rel,
  * Report any sequence mismatches as a single warning log.
  */
 static void
-report_mismatched_sequences(StringInfo warning_sequences)
+report_mismatched_sequences(StringInfo mismatched_seqs)
 {
-	if (warning_sequences->len)
+	if (mismatched_seqs->len)
 	{
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
-					   warning_sequences->data, MySubscription->name),
+					   mismatched_seqs->data, MySubscription->name),
 				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
 
-		resetStringInfo(warning_sequences);
+		resetStringInfo(mismatched_seqs);
 	}
 }
 
@@ -282,15 +282,15 @@ report_mismatched_sequences(StringInfo warning_sequences)
  * append_mismatched_sequences
  *
  * Appends details of sequences that have discrepancies between the publisher
- * and subscriber to the warning_sequences string.
+ * and subscriber to the mismatched_seqs string.
  */
 static void
-append_mismatched_sequences(StringInfo warning_sequences, Relation seqrel)
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
 {
-	if (warning_sequences->len)
-		appendStringInfoString(warning_sequences, ", ");
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
 
-	appendStringInfo(warning_sequences, "\"%s.%s\"",
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
 					 get_namespace_name(RelationGetNamespace(seqrel)),
 					 RelationGetRelationName(seqrel));
 }
@@ -314,7 +314,7 @@ LogicalRepSyncSequences(void)
 	bool		start_txn = true;
 	Oid			subid = MyLogicalRepWorker->subid;
 	MemoryContext oldctx;
-	StringInfo	warning_sequences = makeStringInfo();
+	StringInfo	mismatched_seqs = makeStringInfo();
 
 /*
  * Synchronizing each sequence individually incurs overhead from starting
@@ -425,15 +425,15 @@ LogicalRepSyncSequences(void)
 		PG_CATCH();
 		{
 			if (sequence_mismatch)
-				append_mismatched_sequences(warning_sequences, sequence_rel);
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
 
-			report_mismatched_sequences(warning_sequences);
+			report_mismatched_sequences(mismatched_seqs);
 			PG_RE_THROW();
 		}
 		PG_END_TRY();
 
 		if (sequence_mismatch)
-			append_mismatched_sequences(warning_sequences, sequence_rel);
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
 
 		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
 								   sequence_lsn);
@@ -462,7 +462,7 @@ LogicalRepSyncSequences(void)
 										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
 			}
 
-			report_mismatched_sequences(warning_sequences);
+			report_mismatched_sequences(mismatched_seqs);
 
 			ereport(LOG,
 					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
20240816_PS_COMMIT_MESSAGE_SEQ_0004 (application/octet-stream)
#157vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#156)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 16 Aug 2024 at 10:26, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh. I looked at the latest v20240815* patch set.

I have only the following few comments for patch v20240815-0004, below.

Thanks, these are handled in the v20240816 version patch attached.

Regards,
Vignesh

Attachments:

v20240816-0005-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 374929d11f50afc045474de1a26428436643a66a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240816 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2937384b00..4aad02e1ee 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..ed1fb93cbf 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1872,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2196,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2211,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1
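
As a quick recap of the workflow the documentation patch above describes, here is a hedged sketch of handling a sequence parameter mismatch; the sequence and subscription names follow the documentation examples and are illustrative only:

-- subscriber: if a WARNING reported mismatched parameters, align the local
-- sequence definition with the publisher's
ALTER SEQUENCE s2 INCREMENT BY 10;

-- then re-synchronize all subscribed sequences
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- publisher: the new pg_publication_sequences view lists published sequences
SELECT pubname, schemaname, sequencename FROM pg_publication_sequences;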

v20240816-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch; charset=US-ASCII)
From c0fa403305e366e7ac1cb6c54b658b6d2f977569 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240816 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 5dd95d73a1..224c66d882 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19616,6 +19616,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current value of the sequence, from its on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..5ede8442b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
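
For reviewers trying the 0001 patch, here is a minimal usage sketch of
pg_sequence_state(); the sequence name is illustrative, and the commented
values assume a freshly created sequence with default settings:

CREATE SEQUENCE demo_seq;

SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');
-- Right after creation: last_value = 1, log_cnt = 0, is_called = f,
-- matching the expected output added to the sequence regression test.

SELECT nextval('demo_seq');

SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');
-- After the first nextval(), is_called = t and page_lsn reflects the WAL
-- record written for that fetch; comparing page_lsn against a known LSN is
-- what lets a caller decide whether a given sequence change happened before
-- or after the state it captured.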

Attachment: v20240816-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch; charset=US-ASCII)
From 5793f0007dfc20d8b9fdfaf8372945c6bbd47d2a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240816 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/replication/logical/Makefile    |   1 +
 src/backend/replication/logical/meson.build |   1 +
 src/backend/replication/logical/syncutils.c | 178 ++++++++++++++++++++
 src/backend/replication/logical/tablesync.c | 161 +-----------------
 src/include/replication/worker_internal.h   |   5 +
 5 files changed, 188 insertions(+), 158 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..3964a30109 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -27,6 +27,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..27a0e30ab7 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -13,6 +13,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..4bbc4814a4
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,178 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_TABLE_STATE_NEEDS_REBUILD,
+	SYNC_TABLE_STATE_REBUILD_STARTED,
+	SYNC_TABLE_STATE_VALID,
+} SyncingTablesState;
+
+static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+extern List *table_states_not_ready;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+finish_sync_worker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+{
+	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchTableStates(bool *started_tx)
+{
+	static bool has_subrels = false;
+
+	*started_tx = false;
+
+	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch all non-ready tables. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY relations found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subrels = (table_states_not_ready != NIL) ||
+			HasSubscriptionRelations(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			table_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	return has_subrels;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+process_syncing_tables(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			process_syncing_tables_for_sync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			process_syncing_tables_for_apply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..8776fe4e0f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,7 +238,7 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
+void
 process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
@@ -414,7 +361,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
+void
 process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1561,77 +1477,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..fe63303439 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -250,16 +250,21 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() finish_sync_worker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
+extern bool FetchTableStates(bool *started_tx);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
+extern void process_syncing_sequences_for_apply(void);
 extern void invalidate_syncing_table_states(Datum arg, int cacheid,
 											uint32 hashvalue);
 
-- 
2.34.1

Attachment: v20240816-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 0183eb05f4c095f001aedc8bf1970ed189860d32 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240816 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp are enhanced to display which
publications include a given sequence and whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..b081b7249b 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1486,7 +1490,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20240816-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From 16c23186003e58339b55d05e4adb9842f14aa574 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 13 Aug 2024 11:44:51 +0530
Subject: [PATCH v20240816 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of INIT-state sequences using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done, committing synchronized sequences in
       batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
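
For illustration, here is a minimal usage sketch of the workflow described
above; the publication/subscription names and the connection string are made
up, and the FOR ALL SEQUENCES publication syntax comes from the earlier
patches in this series:

    -- Publisher
    CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

    -- Subscriber: the published sequences are added to pg_subscription_rel
    -- in INIT state and a sequencesync worker is started
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=pub1 dbname=postgres' PUBLICATION pub_all_seq;

    -- Later, re-synchronize all sequences (e.g. just before an upgrade)
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;

    -- Check the per-sequence state ('i' = INIT, 'r' = READY)
    SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;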
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  62 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 627 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 114 +++-
 src/backend/replication/logical/tablesync.c   |  42 +-
 src/backend/replication/logical/worker.c      |  74 ++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   5 +-
 src/include/replication/worker_internal.h     |  33 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 src/tools/pgindent/typedefs.list              |   2 +-
 28 files changed, 1464 insertions(+), 196 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
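
As a quick check on the publisher, the new SRF can also be queried directly;
a sketch, where the publication name is hypothetical and the output column
name (relid) matches its use in the pg_publication_sequences view added later
in this patch:

    SELECT relid::regclass AS sequence
    FROM pg_get_publication_sequences('pub_all_seq');
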
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If all_states is true, return all requested relations regardless of
+ * state. Otherwise, return only tables that have not yet reached READY
+ * state and sequences that are still in INIT state (i.e. relations that
+ * still need synchronizing).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
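For reference, the new view can be queried like the existing
pg_publication_tables view; a sketch, with a made-up publication name:

    SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'pub_all_seq';
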
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt of sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page LSN of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page LSN of the remote sequence at the moment
+ * it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index b925c464ae..bbe4346f27 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note: this is a common function for handling different REFRESH commands,
+ * depending on the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting their pg_subscription_rel entries to INIT state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 3964a30109..99d248dd01 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
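
While a sequence sync is in progress, the new worker type should be visible
in the subscription statistics; a sketch, assuming the pg_stat_subscription
view exposes this worker-type string the same way it does for the existing
worker types:

    SELECT subname, pid, worker_type
    FROM pg_stat_subscription
    WHERE worker_type = 'sequence synchronization';
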
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 27a0e30ab7..c3c836b88b 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..934646f66a
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,627 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * transaction by fetching each sequence's value and page LSN from the remote
+ * publisher and applying them to the corresponding local sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the
+ * sequence relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+	INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%d), pg_sequence where seqrelid = %d",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence list from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence at the time it was copied. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. On the other hand, an excessively
+ * high batch size would keep transactions (and the locks they hold) open
+ * for too long.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it differs from the leader apply worker's and
+	 * synchronous replication can distinguish the two connections.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, report a warning for the sequences whose
+		 * definitions did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * any sequences still remain to be synced after the sequencesync worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequencesync worker already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 4bbc4814a4..51f7f4c33e 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -24,21 +24,25 @@
 
 typedef enum
 {
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
+	SYNC_RELATION_STATE_NEEDS_REBUILD,
+	SYNC_RELATION_STATE_REBUILD_STARTED,
+	SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
 extern List *table_states_not_ready;
+extern List *sequence_states_not_ready;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -53,15 +57,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -70,47 +83,55 @@ finish_sync_worker(void)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 }
 
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchTableStates(bool *started_tx)
+FetchRelationStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	/*
+	 * This is declared as static, since the same value can be used until the
+	 * system table is invalidated.
+	 */
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_RELATION_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -118,19 +139,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -139,18 +164,26 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_RELATION_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATION_STATE_VALID;
 	}
 
-	return has_subrels;
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	return has_subtables;
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -168,7 +201,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8776fe4e0f..079d4f0a3a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -122,7 +122,7 @@
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-List *table_states_not_ready = NIL;
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
@@ -161,7 +161,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,7 +465,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -1236,7 +1237,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1554,7 +1555,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1562,7 +1563,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1576,17 +1577,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..7ff0d2c4dc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1147,8 +1155,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1203,8 +1214,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1269,8 +1283,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1404,8 +1421,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2246,8 +2266,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3644,8 +3667,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,8 +4712,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 79ecaa4c4c..7d5c4e0b22 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5ede8442b4..c9fd1fa26b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12015,6 +12015,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..479407abf7 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,10 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
+
+extern void process_syncing_sequences_for_apply(void);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index fe63303439..c57afbbcb0 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,27 +250,30 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() finish_sync_worker(void);
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
-extern bool FetchTableStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_apply(XLogRecPtr current_lsn);
 extern void process_syncing_sequences_for_apply(void);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -331,15 +338,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..0d89651697
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should warn when the
+# sequence definition differs between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning about differing parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0ce48da963..6595a46692 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2788,13 +2788,13 @@ SupportRequestSelectivity
 SupportRequestSimplify
 SupportRequestWFuncMonotonic
 Syn
+SyncingRelationsState
 SyncOps
 SyncRepConfigData
 SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

#158vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#157)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 16 Aug 2024 at 11:08, vignesh C <vignesh21@gmail.com> wrote:

On Fri, 16 Aug 2024 at 10:26, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh. I looked at the latest v20240815* patch set.

I have only the following few comments for patch v20240815-0004, below.

Thanks, these are handled in the v20240816 version patch attached.

CFBot reported one warning with the patch, so here is an updated patch
that addresses it.
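
For anyone trying the patch set, here is a minimal usage sketch condensed
from the attached documentation and the 034_sequences.pl test; the object
names and connection string are only illustrative:

-- On the publisher
CREATE SEQUENCE s1;
SELECT nextval('s1');
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

-- On the subscriber (the sequence must already exist, since DDL is not
-- replicated; a definition mismatch is reported as a WARNING)
CREATE SEQUENCE s1;
CREATE SUBSCRIPTION seq_sub CONNECTION 'host=... dbname=...' PUBLICATION seq_pub;

-- Later, after further nextval() calls on the publisher, re-synchronize all
-- previously subscribed sequences on the subscriber
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;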

Regards,
Vignesh

Attachments:

v20240817-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 579a10badd695042b20d2bea78e4481165a58ef2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240817 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2937384b00..4aad02e1ee 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..ed1fb93cbf 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and the
+    subscriber, and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1872,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2196,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2211,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
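As a usage sketch of the two refresh variants documented above (sub1 is a hypothetical subscription name):

    -- pick up newly added tables and sequences only
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- re-synchronize the data of all subscribed sequences
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;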
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

Attachment: v20240817-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch, US-ASCII)
From d4433b3656c4ca0a9b6aaed0c7c65b0783e6e8c6 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sat, 17 Aug 2024 20:11:53 +0530
Subject: [PATCH v20240817 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
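
The INIT -> READY progression can be observed on the subscriber side with a
query like the following (relkind 'S' identifies sequences; srsubstate is 'i'
for INIT and 'r' for READY):

    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';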
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  62 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 ++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/launcher.c    |  70 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 529 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 240 ++++++--
 src/backend/replication/logical/tablesync.c   |  12 +-
 src/backend/replication/logical/worker.c      |  74 ++-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   6 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  31 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 src/tools/pgindent/typedefs.list              |   2 +-
 28 files changed, 1459 insertions(+), 195 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
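The new SRF can be exercised directly; here pub1 is a hypothetical publication that includes sequences, and relid is the output column name that the pg_publication_sequences view below relies on:

    SELECT relid::regclass AS sequence
    FROM pg_get_publication_sequences('pub1');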
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -457,18 +460,19 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If all_states is true, return all of the requested relations regardless of
+ * state; otherwise return only tables that have not yet reached READY state
+ * and only sequences that are still in INIT state (the only state other than
+ * READY that sequences can have).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
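A quick check of the view added in this hunk, for a hypothetical publication pub1 that includes sequences:

    SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'pub1';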
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg setval procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg setval procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn will be utilized in logical replication sequence
+ * synchronization to record the page_lsn of the sequence in the pg_subscription_rel
+ * system catalog. It will reflect the page_lsn of the remote sequence at the
+ * moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
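For reference, the on-disk sequence state (including page_lsn) read during synchronization can also be inspected directly. pg_sequence_state() comes from an earlier patch in this series and is shown here taking a sequence OID, with a hypothetical sequence public.s1:

    SELECT last_value, log_cnt, is_called, page_lsn
    FROM pg_sequence_state('public.s1'::regclass::oid);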
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index b925c464ae..bbe4346f27 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or publication refresh.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
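For reference, the query built by fetch_sequence_list() and sent to the publisher looks like this for a single hypothetical publication pub1:

    SELECT DISTINCT s.schemaname, s.sequencename
    FROM pg_catalog.pg_publication_sequences s
    WHERE s.pubname IN ('pub1');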
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 3964a30109..99d248dd01 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..cbe7c814ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker's failure time in the apply worker's entry
+ * for this subscription.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 27a0e30ab7..c3c836b88b 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..c8abe757a2
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,529 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in a
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+	INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence list from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
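+	/*
+	 * Update the local sequence with the remote state. Any parameter mismatch
+	 * detected above is reported by the caller.
+	 */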
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence in its own transaction would incur the
+ * overhead of repeatedly starting and committing transactions. Conversely,
+ * an excessively large batch would keep a transaction open for an extended
+ * period. Hence, sequences are synchronized in batches of this size.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it differs from the leader apply worker's and
+	 * synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
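+		/*
+		 * Unlike tables, sequences have no incremental catch-up phase, so the
+		 * sequence can be marked READY as soon as its state has been copied.
+		 */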
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
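+			/*
+			 * The current batch began at the largest multiple of
+			 * MAX_SEQUENCES_SYNC_PER_BATCH that does not exceed the index of
+			 * the last synchronized sequence (curr_seq - 1).
+			 */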
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource exhaustion and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report that the worker failed during sequence synchronization.
+			 * Abort the current transaction so that the stats message is sent
+			 * in an idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	finish_sync_worker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index b841ab7941..ce87ecec4c 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -26,22 +26,25 @@
 
 typedef enum
 {
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
+	SYNC_RELATION_STATE_NEEDS_REBUILD,
+	SYNC_RELATION_STATE_REBUILD_STARTED,
+	SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
 
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchRelationStates(void);
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-finish_sync_worker(void)
+finish_sync_worker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ finish_sync_worker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -73,9 +85,9 @@ finish_sync_worker(void)
  * Callback from syscache invalidation.
  */
 void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+invalidate_syncing_relation_states(Datum arg, int cacheid, uint32 hashvalue)
 {
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+	relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 }
 
 /*
@@ -114,9 +126,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -149,6 +158,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -162,11 +179,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -203,7 +215,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -345,11 +358,110 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still sequences to be synced after the sequencesync worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for any newly added tables, and start a new
+ * sequencesync worker for any newly added sequences.
  */
 void
-process_syncing_tables(XLogRecPtr current_lsn)
+process_syncing_relations(XLogRecPtr current_lsn)
 {
 	switch (MyLogicalRepWorker->type)
 	{
@@ -367,7 +479,20 @@ process_syncing_tables(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -379,39 +504,47 @@ process_syncing_tables(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 static bool
-FetchTableStates(bool *started_tx)
+FetchRelationStates(void)
 {
-	static bool has_subrels = false;
-
-	*started_tx = false;
+	/*
+	 * This is declared static so that the cached value can be reused until
+	 * the subscription relation syscache is invalidated.
+	 */
+	static bool has_subtables = false;
+	bool		started_tx = false;
 
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
 		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+		relation_states_validity = SYNC_RELATION_STATE_REBUILD_STARTED;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -419,19 +552,23 @@ FetchTableStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
 
 		/*
 		 * If the subscription relation cache has been invalidated since we
@@ -440,11 +577,17 @@ FetchTableStates(bool *started_tx)
 		 * table states marked as stale so that we'll rebuild it again on next
 		 * access. Otherwise, we mark the table states as valid.
 		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
+		if (relation_states_validity == SYNC_RELATION_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATION_STATE_VALID;
 	}
 
-	return has_subrels;
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	return has_subtables;
 }
 
 /*
@@ -458,17 +601,10 @@ FetchTableStates(bool *started_tx)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 17c4d2e76a..347e1e7887 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		finish_sync_worker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -968,7 +968,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			finish_sync_worker(WORKERTYPE_TABLESYNC);	/* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1286,7 +1286,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1294,7 +1294,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	finish_sync_worker(WORKERTYPE_TABLESYNC);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..7ff0d2c4dc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,8 +1030,11 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1147,8 +1155,11 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1203,8 +1214,11 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1269,8 +1283,11 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1404,8 +1421,11 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2246,8 +2266,11 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
+	process_syncing_relations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3644,8 +3667,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
+			process_syncing_relations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4679,8 +4712,11 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  invalidate_syncing_relation_states,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 79ecaa4c4c..7d5c4e0b22 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5ede8442b4..c9fd1fa26b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12015,6 +12015,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,9 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern bool HasSubscriptionTables(Oid subid);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6ff5643132..1c0834bde5 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,14 +250,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() finish_sync_worker(void);
+extern void pg_attribute_noreturn() finish_sync_worker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -261,10 +268,10 @@ extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern bool wait_for_relation_state_change(Oid relid, char expected_state);
-extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_relations(XLogRecPtr current_lsn);
 extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void invalidate_syncing_relation_states(Datum arg, int cacheid,
+											   uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -329,15 +336,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..0d89651697
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning about differing parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0ce48da963..6595a46692 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2788,13 +2788,13 @@ SupportRequestSelectivity
 SupportRequestSimplify
 SupportRequestWFuncMonotonic
 Syn
+SyncingRelationsState
 SyncOps
 SyncRepConfigData
 SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

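For anyone reviewing the TAP test above, here is a minimal sketch of the user-facing flow it exercises; the object names and the connection string are placeholders, and the syntax shown is only what 034_sequences.pl itself uses:

-- Publisher: publish all sequences and advance one of them.
CREATE SEQUENCE regress_s1;
SELECT nextval('regress_s1') FROM generate_series(1, 100);
CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

-- Subscriber: the sequence must already exist with matching parameters.
CREATE SEQUENCE regress_s1;
CREATE SUBSCRIPTION regress_seq_sub
  CONNECTION 'host=publisher dbname=postgres'  -- placeholder connection string
  PUBLICATION regress_seq_pub;

-- Later, re-synchronize existing and newly published sequences on demand.
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

-- Check the synchronized state on the subscriber.
SELECT last_value, log_cnt, is_called FROM regress_s1;
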
Attachment: v20240817-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 20fd18b40748ebab540f2bce16afca781b1a7be8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240817 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/replication/logical/Makefile    |   1 +
 src/backend/replication/logical/meson.build |   1 +
 src/backend/replication/logical/syncutils.c | 478 ++++++++++++++++++++
 src/backend/replication/logical/tablesync.c | 457 +------------------
 src/include/replication/worker_internal.h   |   3 +
 5 files changed, 485 insertions(+), 455 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..3964a30109 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -27,6 +27,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..27a0e30ab7 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -13,6 +13,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..b841ab7941
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,478 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_TABLE_STATE_NEEDS_REBUILD,
+	SYNC_TABLE_STATE_REBUILD_STARTED,
+	SYNC_TABLE_STATE_VALID,
+} SyncingTablesState;
+
+static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+static List *table_states_not_ready = NIL;
+static bool FetchTableStates(bool *started_tx);
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+finish_sync_worker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
+{
+	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Handle table synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription tables that are individually tracked by the
+ * apply process (currently, all that have state other than
+ * SUBREL_STATE_READY) and manage synchronization for them.
+ *
+ * If there are tables that need synchronizing and are not being synchronized
+ * yet, start sync workers for them (if there are free slots for sync
+ * workers).  To prevent starting the sync worker for the same relation at a
+ * high frequency after a failure, we store its last start time with each sync
+ * state info.  We start the sync worker for the same relation after waiting
+ * at least wal_retrieve_retry_interval.
+ *
+ * For tables that are being synchronized already, check if sync workers
+ * either need action from the apply worker or have finished.  This is the
+ * SYNCWAIT to CATCHUP transition.
+ *
+ * If the synchronization position is reached (SYNCDONE), then the table can
+ * be marked as READY and is no longer tracked.
+ */
+static void
+process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+{
+	struct tablesync_start_time_mapping
+	{
+		Oid			relid;
+		TimestampTz last_start_time;
+	};
+	static HTAB *last_start_times = NULL;
+	ListCell   *lc;
+	bool		started_tx = false;
+	bool		should_exit = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription tables here. */
+	FetchTableStates(&started_tx);
+
+	/*
+	 * Prepare a hash table for tracking last start times of workers, to avoid
+	 * immediate restarts.  We don't need it if there are no tables that need
+	 * syncing.
+	 */
+	if (table_states_not_ready != NIL && !last_start_times)
+	{
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(Oid);
+		ctl.entrysize = sizeof(struct tablesync_start_time_mapping);
+		last_start_times = hash_create("Logical replication table sync worker start times",
+									   256, &ctl, HASH_ELEM | HASH_BLOBS);
+	}
+
+	/*
+	 * Clean up the hash table when we're done with all tables (just to
+	 * release the bit of memory).
+	 */
+	else if (table_states_not_ready == NIL && last_start_times)
+	{
+		hash_destroy(last_start_times);
+		last_start_times = NULL;
+	}
+
+	/*
+	 * Process all tables that are being synchronized.
+	 */
+	foreach(lc, table_states_not_ready)
+	{
+		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+
+		if (rstate->state == SUBREL_STATE_SYNCDONE)
+		{
+			/*
+			 * Apply has caught up to the position where the table sync has
+			 * finished.  Mark the table as ready so that the apply will just
+			 * continue to replicate it normally.
+			 */
+			if (current_lsn >= rstate->lsn)
+			{
+				char		originname[NAMEDATALEN];
+
+				rstate->state = SUBREL_STATE_READY;
+				rstate->lsn = current_lsn;
+				if (!started_tx)
+				{
+					StartTransactionCommand();
+					started_tx = true;
+				}
+
+				/*
+				 * Remove the tablesync origin tracking if exists.
+				 *
+				 * There is a chance that the user is concurrently performing
+				 * refresh for the subscription where we remove the table
+				 * state and its origin or the tablesync worker would have
+				 * already removed this origin. We can't rely on tablesync
+				 * worker to remove the origin tracking as if there is any
+				 * error while dropping we won't restart it to drop the
+				 * origin. So passing missing_ok = true.
+				 */
+				ReplicationOriginNameForLogicalRep(MyLogicalRepWorker->subid,
+												   rstate->relid,
+												   originname,
+												   sizeof(originname));
+				replorigin_drop_by_name(originname, true, false);
+
+				/*
+				 * Update the state to READY only after the origin cleanup.
+				 */
+				UpdateSubscriptionRelState(MyLogicalRepWorker->subid,
+										   rstate->relid, rstate->state,
+										   rstate->lsn);
+			}
+		}
+		else
+		{
+			LogicalRepWorker *syncworker;
+
+			/*
+			 * Look for a sync worker for this relation.
+			 */
+			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												rstate->relid, false);
+
+			if (syncworker)
+			{
+				/* Found one, update our copy of its state */
+				SpinLockAcquire(&syncworker->relmutex);
+				rstate->state = syncworker->relstate;
+				rstate->lsn = syncworker->relstate_lsn;
+				if (rstate->state == SUBREL_STATE_SYNCWAIT)
+				{
+					/*
+					 * Sync worker is waiting for apply.  Tell sync worker it
+					 * can catchup now.
+					 */
+					syncworker->relstate = SUBREL_STATE_CATCHUP;
+					syncworker->relstate_lsn =
+						Max(syncworker->relstate_lsn, current_lsn);
+				}
+				SpinLockRelease(&syncworker->relmutex);
+
+				/* If we told worker to catch up, wait for it. */
+				if (rstate->state == SUBREL_STATE_SYNCWAIT)
+				{
+					/* Signal the sync worker, as it may be waiting for us. */
+					if (syncworker->proc)
+						logicalrep_worker_wakeup_ptr(syncworker);
+
+					/* Now safe to release the LWLock */
+					LWLockRelease(LogicalRepWorkerLock);
+
+					if (started_tx)
+					{
+						/*
+						 * We must commit the existing transaction to release
+						 * the existing locks before entering a busy loop.
+						 * This is required to avoid any undetected deadlocks
+						 * due to any existing lock as deadlock detector won't
+						 * be able to detect the waits on the latch.
+						 */
+						CommitTransactionCommand();
+						pgstat_report_stat(false);
+					}
+
+					/*
+					 * Enter busy loop and wait for synchronization worker to
+					 * reach expected state (or die trying).
+					 */
+					StartTransactionCommand();
+					started_tx = true;
+
+					wait_for_relation_state_change(rstate->relid,
+												   SUBREL_STATE_SYNCDONE);
+				}
+				else
+					LWLockRelease(LogicalRepWorkerLock);
+			}
+			else
+			{
+				/*
+				 * If there is no sync worker for this table yet, count
+				 * running sync workers for this subscription, while we have
+				 * the lock.
+				 */
+				int			nsyncworkers =
+					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+				/* Now safe to release the LWLock */
+				LWLockRelease(LogicalRepWorkerLock);
+
+				/*
+				 * If there are free sync worker slot(s), start a new sync
+				 * worker for the table.
+				 */
+				if (nsyncworkers < max_sync_workers_per_subscription)
+				{
+					TimestampTz now = GetCurrentTimestamp();
+					struct tablesync_start_time_mapping *hentry;
+					bool		found;
+
+					hentry = hash_search(last_start_times, &rstate->relid,
+										 HASH_ENTER, &found);
+
+					if (!found ||
+						TimestampDifferenceExceeds(hentry->last_start_time, now,
+												   wal_retrieve_retry_interval))
+					{
+						logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
+												 MyLogicalRepWorker->dbid,
+												 MySubscription->oid,
+												 MySubscription->name,
+												 MyLogicalRepWorker->userid,
+												 rstate->relid,
+												 DSM_HANDLE_INVALID);
+						hentry->last_start_time = now;
+					}
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		/*
+		 * Even when the two_phase mode is requested by the user, it remains
+		 * as 'pending' until all tablesyncs have reached READY state.
+		 *
+		 * When this happens, we restart the apply worker and (if the
+		 * conditions are still ok) then the two_phase tri-state will become
+		 * 'enabled' at that time.
+		 *
+		 * Note: If the subscription has no tables then leave the state as
+		 * PENDING, which allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
+		 * work.
+		 */
+		if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING)
+		{
+			CommandCounterIncrement();	/* make updates visible */
+			if (AllTablesyncsReady())
+			{
+				ereport(LOG,
+						(errmsg("logical replication apply worker for subscription \"%s\" will restart so that two_phase can be enabled",
+								MySubscription->name)));
+				should_exit = true;
+			}
+		}
+
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (should_exit)
+	{
+		/*
+		 * Reset the last-start time for this worker so that the launcher will
+		 * restart it without waiting for wal_retrieve_retry_interval.
+		 */
+		ApplyLauncherForgetWorkerStartTime(MySubscription->oid);
+
+		proc_exit(0);
+	}
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+process_syncing_tables(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			process_syncing_tables_for_sync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			process_syncing_tables_for_apply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+static bool
+FetchTableStates(bool *started_tx)
+{
+	static bool has_subrels = false;
+
+	*started_tx = false;
+
+	if (table_states_validity != SYNC_TABLE_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch all non-ready tables. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY relations found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subrels = (table_states_not_ready != NIL) ||
+			HasSubscriptionRelations(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
+			table_states_validity = SYNC_TABLE_STATE_VALID;
+	}
+
+	return has_subrels;
+}
+
+/*
+ * If the subscription has no tables then return false.
+ *
+ * Otherwise, are all tablesyncs READY?
+ *
+ * Note: This function is not suitable to be called from outside of apply or
+ * tablesync workers because MySubscription needs to be already initialized.
+ */
+bool
+AllTablesyncsReady(void)
+{
+	bool		started_tx = false;
+	bool		has_subrels = false;
+
+	/* We need up-to-date sync state info for subscription tables here. */
+	has_subrels = FetchTableStates(&started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/*
+	 * Return false when there are no tables in subscription or not all tables
+	 * are in ready state; true otherwise.
+	 */
+	return has_subrels && (table_states_not_ready == NIL);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..17c4d2e76a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +134,7 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
+bool
 wait_for_relation_state_change(Oid relid, char expected_state)
 {
 	char		state;
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,7 +236,7 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
+void
 process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
@@ -393,303 +338,6 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
 }
 
-/*
- * Handle table synchronization cooperation from the apply worker.
- *
- * Walk over all subscription tables that are individually tracked by the
- * apply process (currently, all that have state other than
- * SUBREL_STATE_READY) and manage synchronization for them.
- *
- * If there are tables that need synchronizing and are not being synchronized
- * yet, start sync workers for them (if there are free slots for sync
- * workers).  To prevent starting the sync worker for the same relation at a
- * high frequency after a failure, we store its last start time with each sync
- * state info.  We start the sync worker for the same relation after waiting
- * at least wal_retrieve_retry_interval.
- *
- * For tables that are being synchronized already, check if sync workers
- * either need action from the apply worker or have finished.  This is the
- * SYNCWAIT to CATCHUP transition.
- *
- * If the synchronization position is reached (SYNCDONE), then the table can
- * be marked as READY and is no longer tracked.
- */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
-{
-	struct tablesync_start_time_mapping
-	{
-		Oid			relid;
-		TimestampTz last_start_time;
-	};
-	static HTAB *last_start_times = NULL;
-	ListCell   *lc;
-	bool		started_tx = false;
-	bool		should_exit = false;
-
-	Assert(!IsTransactionState());
-
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
-	/*
-	 * Prepare a hash table for tracking last start times of workers, to avoid
-	 * immediate restarts.  We don't need it if there are no tables that need
-	 * syncing.
-	 */
-	if (table_states_not_ready != NIL && !last_start_times)
-	{
-		HASHCTL		ctl;
-
-		ctl.keysize = sizeof(Oid);
-		ctl.entrysize = sizeof(struct tablesync_start_time_mapping);
-		last_start_times = hash_create("Logical replication table sync worker start times",
-									   256, &ctl, HASH_ELEM | HASH_BLOBS);
-	}
-
-	/*
-	 * Clean up the hash table when we're done with all tables (just to
-	 * release the bit of memory).
-	 */
-	else if (table_states_not_ready == NIL && last_start_times)
-	{
-		hash_destroy(last_start_times);
-		last_start_times = NULL;
-	}
-
-	/*
-	 * Process all tables that are being synchronized.
-	 */
-	foreach(lc, table_states_not_ready)
-	{
-		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
-
-		if (rstate->state == SUBREL_STATE_SYNCDONE)
-		{
-			/*
-			 * Apply has caught up to the position where the table sync has
-			 * finished.  Mark the table as ready so that the apply will just
-			 * continue to replicate it normally.
-			 */
-			if (current_lsn >= rstate->lsn)
-			{
-				char		originname[NAMEDATALEN];
-
-				rstate->state = SUBREL_STATE_READY;
-				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
-
-				/*
-				 * Remove the tablesync origin tracking if exists.
-				 *
-				 * There is a chance that the user is concurrently performing
-				 * refresh for the subscription where we remove the table
-				 * state and its origin or the tablesync worker would have
-				 * already removed this origin. We can't rely on tablesync
-				 * worker to remove the origin tracking as if there is any
-				 * error while dropping we won't restart it to drop the
-				 * origin. So passing missing_ok = true.
-				 */
-				ReplicationOriginNameForLogicalRep(MyLogicalRepWorker->subid,
-												   rstate->relid,
-												   originname,
-												   sizeof(originname));
-				replorigin_drop_by_name(originname, true, false);
-
-				/*
-				 * Update the state to READY only after the origin cleanup.
-				 */
-				UpdateSubscriptionRelState(MyLogicalRepWorker->subid,
-										   rstate->relid, rstate->state,
-										   rstate->lsn);
-			}
-		}
-		else
-		{
-			LogicalRepWorker *syncworker;
-
-			/*
-			 * Look for a sync worker for this relation.
-			 */
-			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-
-			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
-			if (syncworker)
-			{
-				/* Found one, update our copy of its state */
-				SpinLockAcquire(&syncworker->relmutex);
-				rstate->state = syncworker->relstate;
-				rstate->lsn = syncworker->relstate_lsn;
-				if (rstate->state == SUBREL_STATE_SYNCWAIT)
-				{
-					/*
-					 * Sync worker is waiting for apply.  Tell sync worker it
-					 * can catchup now.
-					 */
-					syncworker->relstate = SUBREL_STATE_CATCHUP;
-					syncworker->relstate_lsn =
-						Max(syncworker->relstate_lsn, current_lsn);
-				}
-				SpinLockRelease(&syncworker->relmutex);
-
-				/* If we told worker to catch up, wait for it. */
-				if (rstate->state == SUBREL_STATE_SYNCWAIT)
-				{
-					/* Signal the sync worker, as it may be waiting for us. */
-					if (syncworker->proc)
-						logicalrep_worker_wakeup_ptr(syncworker);
-
-					/* Now safe to release the LWLock */
-					LWLockRelease(LogicalRepWorkerLock);
-
-					if (started_tx)
-					{
-						/*
-						 * We must commit the existing transaction to release
-						 * the existing locks before entering a busy loop.
-						 * This is required to avoid any undetected deadlocks
-						 * due to any existing lock as deadlock detector won't
-						 * be able to detect the waits on the latch.
-						 */
-						CommitTransactionCommand();
-						pgstat_report_stat(false);
-					}
-
-					/*
-					 * Enter busy loop and wait for synchronization worker to
-					 * reach expected state (or die trying).
-					 */
-					StartTransactionCommand();
-					started_tx = true;
-
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
-				}
-				else
-					LWLockRelease(LogicalRepWorkerLock);
-			}
-			else
-			{
-				/*
-				 * If there is no sync worker for this table yet, count
-				 * running sync workers for this subscription, while we have
-				 * the lock.
-				 */
-				int			nsyncworkers =
-					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
-
-				/* Now safe to release the LWLock */
-				LWLockRelease(LogicalRepWorkerLock);
-
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-												 MyLogicalRepWorker->dbid,
-												 MySubscription->oid,
-												 MySubscription->name,
-												 MyLogicalRepWorker->userid,
-												 rstate->relid,
-												 DSM_HANDLE_INVALID);
-						hentry->last_start_time = now;
-					}
-				}
-			}
-		}
-	}
-
-	if (started_tx)
-	{
-		/*
-		 * Even when the two_phase mode is requested by the user, it remains
-		 * as 'pending' until all tablesyncs have reached READY state.
-		 *
-		 * When this happens, we restart the apply worker and (if the
-		 * conditions are still ok) then the two_phase tri-state will become
-		 * 'enabled' at that time.
-		 *
-		 * Note: If the subscription has no tables then leave the state as
-		 * PENDING, which allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
-		 * work.
-		 */
-		if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING)
-		{
-			CommandCounterIncrement();	/* make updates visible */
-			if (AllTablesyncsReady())
-			{
-				ereport(LOG,
-						(errmsg("logical replication apply worker for subscription \"%s\" will restart so that two_phase can be enabled",
-								MySubscription->name)));
-				should_exit = true;
-			}
-		}
-
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	if (should_exit)
-	{
-		/*
-		 * Reset the last-start time for this worker so that the launcher will
-		 * restart it without waiting for wal_retrieve_retry_interval.
-		 */
-		ApplyLauncherForgetWorkerStartTime(MySubscription->oid);
-
-		proc_exit(0);
-	}
-}
-
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
 
 /*
  * Create list of columns for COPY based on logical relation mapping.
@@ -1561,77 +1209,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1720,36 +1297,6 @@ TablesyncWorkerMain(Datum main_arg)
 	finish_sync_worker();
 }
 
-/*
- * If the subscription has no tables then return false.
- *
- * Otherwise, are all tablesyncs READY?
- *
- * Note: This function is not suitable to be called from outside of apply or
- * tablesync workers because MySubscription needs to be already initialized.
- */
-bool
-AllTablesyncsReady(void)
-{
-	bool		started_tx = false;
-	bool		has_subrels = false;
-
-	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/*
-	 * Return false when there are no tables in subscription or not all tables
-	 * are in ready state; true otherwise.
-	 */
-	return has_subrels && (table_states_not_ready == NIL);
-}
-
 /*
  * Update the two_phase state of the specified subscription in pg_subscription.
  */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6ff5643132 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -250,6 +250,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() finish_sync_worker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,7 +260,9 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
+extern bool wait_for_relation_state_change(Oid relid, char expected_state);
 extern void process_syncing_tables(XLogRecPtr current_lsn);
+extern void process_syncing_tables_for_sync(XLogRecPtr current_lsn);
 extern void invalidate_syncing_table_states(Datum arg, int cacheid,
 											uint32 hashvalue);
 
-- 
2.34.1

Attachment: v20240817-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch, charset US-ASCII)
From 2b266620964f1b7ea944c38f80925d84309b4b79 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240817 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 749360a4b7..03eb1bf6fc 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19627,6 +19627,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..5ede8442b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
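
A minimal usage sketch of the function added by this patch, based on the
function definition and regression test above (the sequence name "s" is
arbitrary and used only for illustration):

CREATE SEQUENCE s;
SELECT nextval('s');
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('s');

The page_lsn output is what lets the caller determine whether a given
WAL-logged sequence change happened before or after the captured state,
which is the reason for exposing the LSN here.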

Attachment: v20240817-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch, charset US-ASCII)
From 7c4b6ecc23600283845e3569684350f5928e30cb Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240817 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced to better
display which publications include a given sequence and which sequences
are included in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
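
For illustration, the syntax this adds looks like the following (the
publication names are arbitrary; as with FOR ALL TABLES, both commands
require a superuser):

CREATE PUBLICATION pub_sequences FOR ALL SEQUENCES;
CREATE PUBLICATION pub_everything FOR ALL TABLES, SEQUENCES;

pg_dump emits the combined form as "FOR ALL TABLES, SEQUENCES", which is
what the new tests in 002_pg_dump.pl expect.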
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..b081b7249b 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1486,7 +1490,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

#159Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#158)
Re: Logical Replication of sequences

Here are my review comments for the latest patchset

v20240817-0001. No changes. No comments.
v20240817-0002. No changes. No comments.
v20240817-0003. See below.
v20240817-0004. See below.
v20240817-0005. No changes. No comments.

//////

v20240817-0003 and 0004.

(This is a repeat of the same comment as in previous reviews, but lots
more functions seem affected now)

IIUC, the LR code tries to follow function naming conventions (e.g.
CamelCase/snake_case for exposed/static functions respectively),
intended to make the code more readable. But, this only works if the
conventions are followed.

Now, patches 0003 and 0004 are shuffling more and more functions
between modules while changing them from static to non-static (or vice
versa). So, the function name conventions are being violated many
times. IMO these functions ought to be renamed according to their new
modifiers to avoid the confusion caused by ignoring the name
conventions.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#160vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#159)
5 attachment(s)
Re: Logical Replication of sequences

On Mon, 19 Aug 2024 at 07:47, Peter Smith <smithpb2250@gmail.com> wrote:

> Here are my review comments for the latest patchset
>
> v20240817-0001. No changes. No comments.
> v20240817-0002. No changes. No comments.
> v20240817-0003. See below.
> v20240817-0004. See below.
> v20240817-0005. No changes. No comments.
>
> //////
>
> v20240817-0003 and 0004.
>
> (This is a repeat of the same comment as in previous reviews, but lots
> more functions seem affected now)
>
> IIUC, the LR code tries to follow function naming conventions (e.g.
> CamelCase/snake_case for exposed/static functions respectively),
> intended to make the code more readable. But, this only works if the
> conventions are followed.
>
> Now, patches 0003 and 0004 are shuffling more and more functions
> between modules while changing them from static to non-static (or vice
> versa). So, the function name conventions are being violated many
> times. IMO these functions ought to be renamed according to their new
> modifiers to avoid the confusion caused by ignoring the name
> conventions.

I have handled these in the v20240819 version patch attached.

Regards,
Vignesh

Attachments:

v20240819-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 17b9a32004947a29b29c558fd87e21d0b20622b9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240819 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 749360a4b7..03eb1bf6fc 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19627,6 +19627,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..5ede8442b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
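
For reference, a minimal psql sketch of how the new pg_sequence_state()
function added by the 0001 patch would be exercised (the sequence name
below is just an example, and the returned values are illustrative):

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');

    -- Inspect the on-disk state of the sequence, including the page LSN
    -- that the later synchronization patches rely on.
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('demo_seq'::regclass);

    DROP SEQUENCE demo_seq;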

v20240819-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From deb8822a64193126ff39ca8cf852e0d84ae2aeb5 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 19 Aug 2024 16:16:09 +0530
Subject: [PATCH v20240819 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of sequences
       that are in INIT state.
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all sequences are done, committing synchronized
       sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
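
For illustration, the subscriber-side flow described above would look
roughly like the following sketch (publication, subscription, and
connection strings are examples only):

    -- Publisher (the FOR ALL SEQUENCES syntax comes from the earlier
    -- patches in this series):
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- Subscriber: unchanged syntax, triggers the initial sequence sync.
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- Pick up sequences added to the publication since the last refresh.
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

    -- Re-synchronize the values of all sequences (new command).
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    -- Sequences move from 'i' (INIT) to 'r' (READY) once the
    -- sequencesync worker has copied their values.
    SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;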
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 ++++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 ++-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 529 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 210 +++++--
 src/backend/replication/logical/tablesync.c   |  10 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  25 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 src/tools/pgindent/typedefs.list              |   2 +-
 27 files changed, 1428 insertions(+), 164 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 077903f059..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -468,7 +471,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn will be utilized in logical replication sequence
+ * synchronization to record the page_lsn of sequence in the pg_subscription_rel
+ * system catalog. It will reflect the page_lsn of the remote sequence at the
+ * moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index b925c464ae..bbe4346f27 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to the specified publications, using
+ * the given publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
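+	/*
+	 * DISTINCT is used because the same sequence can be published by more
+	 * than one of the specified publications.
+	 */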
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index d0a89cd577..fdf69e4f28 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -739,7 +739,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 3964a30109..99d248dd01 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -25,6 +25,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequencesync worker in the subscription's
+ * apply worker slot.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
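+	/*
+	 * Store the failure time in the apply worker's shared slot, since the
+	 * sequencesync worker's own slot is released when it exits.
+	 */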
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 27a0e30ab7..c3c836b88b 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -11,6 +11,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..aa7938ca1f
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,529 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * transaction by fetching each sequence's value and page LSN from the remote
+ * publisher and applying them to the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction.  The locks on the
+ * sequence relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+	INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_catalog.pg_sequence_state(%u), pg_catalog.pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence state from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
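+	/*
+	 * The remote state can only be applied sensibly if the local sequence was
+	 * defined with the same parameters; otherwise report a mismatch to the
+	 * caller.
+	 */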
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch the remote sequence's OID and relkind. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
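+	/*
+	 * Update the local sequence with the remote state.  This is done even if
+	 * a parameter mismatch was detected; the caller is responsible for
+	 * reporting such mismatches.
+	 */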
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence, fetched above. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Use the slot name rather than the subscription name as the
+	 * application_name, so that it differs from the leader apply worker's
+	 * name and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence synchronization does not honor RLS policies.  That is not
+		 * a problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not
+		 * be able to circumvent RLS.  Disallow logical replication into
+		 * RLS-enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, report a warning for any sequences
+		 * whose parameters did not match before re-throwing the error.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or
+		 * synchronized the last remaining sequence?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* Log all the sequences synchronized during the current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 378b8eba04..cdafcfa77d 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,15 +34,18 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
 static List *table_states_not_ready = NIL;
-static bool FetchRelationStates(bool *started_tx);
+static List *sequence_states_not_ready = NIL;
+static bool FetchRelationStates(void);
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -57,15 +60,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -183,7 +195,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -225,9 +237,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -260,6 +269,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -273,11 +290,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -314,7 +326,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
@@ -456,8 +469,107 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+static void
+process_syncing_sequences_for_apply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/* Check if a sequencesync worker is already running. */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
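+
+			/*
+			 * A single sequencesync worker synchronizes all pending
+			 * sequences, so there is nothing more to do here.
+			 */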
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -478,7 +590,20 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			process_syncing_tables_for_apply(current_lsn);
+			process_syncing_sequences_for_apply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -490,17 +615,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 static bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * Declared static so that the value is retained across calls; it is
+	 * recomputed only after the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
 	{
@@ -513,16 +643,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -530,15 +663,19 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY tables found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
 		has_subtables = (table_states_not_ready != NIL) ||
@@ -555,6 +692,12 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATION_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
 
@@ -569,17 +712,10 @@ FetchRelationStates(bool *started_tx)
 bool
 SyncAllTablesReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index cea5e09a8c..2b095d36bd 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ wait_for_relation_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -857,7 +857,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1175,7 +1175,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1183,7 +1183,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6ef21c4273..23243a114f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -486,6 +486,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1025,7 +1030,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1147,7 +1155,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1203,7 +1214,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1269,7 +1283,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1404,7 +1421,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2246,7 +2266,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3644,7 +3667,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4554,8 +4580,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4634,6 +4660,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4642,14 +4672,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4681,6 +4714,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
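+	/*
+	 * Arrange to record the failure time if the sequencesync worker exits
+	 * abnormally.  The callback is cancelled on a clean exit in
+	 * SyncFinishWorker().
+	 */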
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 79ecaa4c4c..7d5c4e0b22 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5ede8442b4..c9fd1fa26b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12015,6 +12015,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ff25874d5e..f1e510ca70 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -239,6 +242,7 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -246,14 +250,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -328,15 +335,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..0d89651697
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync sequences newly
+# created on the publisher, but should not re-sync changes to sequences that
+# were already synchronized.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync sequences
+# newly created on the publisher, and should also re-sync changes to sequences
+# that were already synchronized.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should emit a warning
+# when a sequence's definition does not match between the publisher and the
+# subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning about differing parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0ce48da963..6595a46692 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2788,13 +2788,13 @@ SupportRequestSelectivity
 SupportRequestSimplify
 SupportRequestWFuncMonotonic
 Syn
+SyncingRelationsState
 SyncOps
 SyncRepConfigData
 SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1
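
For reviewers who would rather not read the whole TAP test above, the
user-visible workflow it exercises is roughly the following psql-level
sketch (object names are the ones used in the test; the connection string
is illustrative):

  -- publisher
  CREATE SEQUENCE regress_s1;
  CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

  -- subscriber (the sequence must already exist; a WARNING is logged if its
  -- parameters differ from the publisher's definition)
  CREATE SEQUENCE regress_s1;
  CREATE SUBSCRIPTION regress_seq_sub
      CONNECTION 'host=... dbname=postgres'
      PUBLICATION regress_seq_pub;   -- initial sync copies the sequence state

  -- later, on the subscriber
  ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
      -- syncs only sequences newly added to the publication
  ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;
      -- re-synchronizes all published sequences

  -- verify the copied state on the subscriber
  SELECT last_value, log_cnt, is_called FROM regress_s1;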

Attachment: v20240819-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 61c07d967a7b04be73d2bad1379899695d4b4415 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240819 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by moving common routines into a new
syncutils.c file. This refactoring will facilitate the development of the
sequence synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   4 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 589 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 570 +----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  10 +-
 9 files changed, 614 insertions(+), 585 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..077903f059 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -457,13 +457,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index ba03eeff1c..3964a30109 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -27,6 +27,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..983ead5929 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -309,7 +309,7 @@ pa_can_start(void)
 	 * should_apply_changes_for_rel) as we won't know remote_final_lsn by that
 	 * time. So, we don't start the new parallel apply worker in this case.
 	 */
-	if (!AllTablesyncsReady())
+	if (!SyncAllTablesReady())
 		return false;
 
 	return true;
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3dec36a6de..27a0e30ab7 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -13,6 +13,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..378b8eba04
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,589 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATION_STATE_NEEDS_REBUILD,
+	SYNC_RELATION_STATE_REBUILD_STARTED,
+	SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
+static List *table_states_not_ready = NIL;
+static bool FetchRelationStates(bool *started_tx);
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATION_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Handle table synchronization cooperation from the synchronization
+ * worker.
+ *
+ * If the sync worker is in CATCHUP state and reached (or passed) the
+ * predetermined synchronization point in the WAL stream, mark the table as
+ * SYNCDONE and finish.
+ */
+static void
+process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+{
+	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
+
+	if (MyLogicalRepWorker->relstate == SUBREL_STATE_CATCHUP &&
+		current_lsn >= MyLogicalRepWorker->relstate_lsn)
+	{
+		TimeLineID	tli;
+		char		syncslotname[NAMEDATALEN] = {0};
+		char		originname[NAMEDATALEN] = {0};
+
+		MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCDONE;
+		MyLogicalRepWorker->relstate_lsn = current_lsn;
+
+		SpinLockRelease(&MyLogicalRepWorker->relmutex);
+
+		/*
+		 * UpdateSubscriptionRelState must be called within a transaction.
+		 */
+		if (!IsTransactionState())
+			StartTransactionCommand();
+
+		UpdateSubscriptionRelState(MyLogicalRepWorker->subid,
+								   MyLogicalRepWorker->relid,
+								   MyLogicalRepWorker->relstate,
+								   MyLogicalRepWorker->relstate_lsn);
+
+		/*
+		 * End streaming so that LogRepWorkerWalRcvConn can be used to drop
+		 * the slot.
+		 */
+		walrcv_endstreaming(LogRepWorkerWalRcvConn, &tli);
+
+		/*
+		 * Cleanup the tablesync slot.
+		 *
+		 * This has to be done after updating the state because otherwise if
+		 * there is an error while doing the database operations we won't be
+		 * able to rollback dropped slot.
+		 */
+		ReplicationSlotNameForTablesync(MyLogicalRepWorker->subid,
+										MyLogicalRepWorker->relid,
+										syncslotname,
+										sizeof(syncslotname));
+
+		/*
+		 * It is important to give an error if we are unable to drop the slot,
+		 * otherwise, it won't be dropped till the corresponding subscription
+		 * is dropped. So passing missing_ok = false.
+		 */
+		ReplicationSlotDropAtPubNode(LogRepWorkerWalRcvConn, syncslotname, false);
+
+		CommitTransactionCommand();
+		pgstat_report_stat(false);
+
+		/*
+		 * Start a new transaction to clean up the tablesync origin tracking.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
+		 *
+		 * We need to do this after the table state is set to SYNCDONE.
+		 * Otherwise, if an error occurs while performing the database
+		 * operation, the worker will be restarted and the in-memory state of
+		 * replication progress (remote_lsn) won't be rolled-back which would
+		 * have been cleared before restart. So, the restarted worker will use
+		 * invalid replication progress state resulting in replay of
+		 * transactions that have already been applied.
+		 */
+		StartTransactionCommand();
+
+		ReplicationOriginNameForLogicalRep(MyLogicalRepWorker->subid,
+										   MyLogicalRepWorker->relid,
+										   originname,
+										   sizeof(originname));
+
+		/*
+		 * Resetting the origin session removes the ownership of the slot.
+		 * This is needed to allow the origin to be dropped.
+		 */
+		replorigin_session_reset();
+		replorigin_session_origin = InvalidRepOriginId;
+		replorigin_session_origin_lsn = InvalidXLogRecPtr;
+		replorigin_session_origin_timestamp = 0;
+
+		/*
+		 * Drop the tablesync's origin tracking if exists.
+		 *
+		 * There is a chance that the user is concurrently performing refresh
+		 * for the subscription where we remove the table state and its origin
+		 * or the apply worker would have removed this origin. So passing
+		 * missing_ok = true.
+		 */
+		replorigin_drop_by_name(originname, true, false);
+
+		SyncFinishWorker();
+	}
+	else
+		SpinLockRelease(&MyLogicalRepWorker->relmutex);
+}
+
+/*
+ * Handle table synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription tables that are individually tracked by the
+ * apply process (currently, all that have state other than
+ * SUBREL_STATE_READY) and manage synchronization for them.
+ *
+ * If there are tables that need synchronizing and are not being synchronized
+ * yet, start sync workers for them (if there are free slots for sync
+ * workers).  To prevent starting the sync worker for the same relation at a
+ * high frequency after a failure, we store its last start time with each sync
+ * state info.  We start the sync worker for the same relation after waiting
+ * at least wal_retrieve_retry_interval.
+ *
+ * For tables that are being synchronized already, check if sync workers
+ * either need action from the apply worker or have finished.  This is the
+ * SYNCWAIT to CATCHUP transition.
+ *
+ * If the synchronization position is reached (SYNCDONE), then the table can
+ * be marked as READY and is no longer tracked.
+ */
+static void
+process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+{
+	struct tablesync_start_time_mapping
+	{
+		Oid			relid;
+		TimestampTz last_start_time;
+	};
+	static HTAB *last_start_times = NULL;
+	ListCell   *lc;
+	bool		started_tx = false;
+	bool		should_exit = false;
+
+	Assert(!IsTransactionState());
+
+	/* We need up-to-date sync state info for subscription tables here. */
+	FetchRelationStates(&started_tx);
+
+	/*
+	 * Prepare a hash table for tracking last start times of workers, to avoid
+	 * immediate restarts.  We don't need it if there are no tables that need
+	 * syncing.
+	 */
+	if (table_states_not_ready != NIL && !last_start_times)
+	{
+		HASHCTL		ctl;
+
+		ctl.keysize = sizeof(Oid);
+		ctl.entrysize = sizeof(struct tablesync_start_time_mapping);
+		last_start_times = hash_create("Logical replication table sync worker start times",
+									   256, &ctl, HASH_ELEM | HASH_BLOBS);
+	}
+
+	/*
+	 * Clean up the hash table when we're done with all tables (just to
+	 * release the bit of memory).
+	 */
+	else if (table_states_not_ready == NIL && last_start_times)
+	{
+		hash_destroy(last_start_times);
+		last_start_times = NULL;
+	}
+
+	/*
+	 * Process all tables that are being synchronized.
+	 */
+	foreach(lc, table_states_not_ready)
+	{
+		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+
+		if (rstate->state == SUBREL_STATE_SYNCDONE)
+		{
+			/*
+			 * Apply has caught up to the position where the table sync has
+			 * finished.  Mark the table as ready so that the apply will just
+			 * continue to replicate it normally.
+			 */
+			if (current_lsn >= rstate->lsn)
+			{
+				char		originname[NAMEDATALEN];
+
+				rstate->state = SUBREL_STATE_READY;
+				rstate->lsn = current_lsn;
+				if (!started_tx)
+				{
+					StartTransactionCommand();
+					started_tx = true;
+				}
+
+				/*
+				 * Remove the tablesync origin tracking if exists.
+				 *
+				 * There is a chance that the user is concurrently performing
+				 * refresh for the subscription where we remove the table
+				 * state and its origin or the tablesync worker would have
+				 * already removed this origin. We can't rely on tablesync
+				 * worker to remove the origin tracking as if there is any
+				 * error while dropping we won't restart it to drop the
+				 * origin. So passing missing_ok = true.
+				 */
+				ReplicationOriginNameForLogicalRep(MyLogicalRepWorker->subid,
+												   rstate->relid,
+												   originname,
+												   sizeof(originname));
+				replorigin_drop_by_name(originname, true, false);
+
+				/*
+				 * Update the state to READY only after the origin cleanup.
+				 */
+				UpdateSubscriptionRelState(MyLogicalRepWorker->subid,
+										   rstate->relid, rstate->state,
+										   rstate->lsn);
+			}
+		}
+		else
+		{
+			LogicalRepWorker *syncworker;
+
+			/*
+			 * Look for a sync worker for this relation.
+			 */
+			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												rstate->relid, false);
+
+			if (syncworker)
+			{
+				/* Found one, update our copy of its state */
+				SpinLockAcquire(&syncworker->relmutex);
+				rstate->state = syncworker->relstate;
+				rstate->lsn = syncworker->relstate_lsn;
+				if (rstate->state == SUBREL_STATE_SYNCWAIT)
+				{
+					/*
+					 * Sync worker is waiting for apply.  Tell sync worker it
+					 * can catchup now.
+					 */
+					syncworker->relstate = SUBREL_STATE_CATCHUP;
+					syncworker->relstate_lsn =
+						Max(syncworker->relstate_lsn, current_lsn);
+				}
+				SpinLockRelease(&syncworker->relmutex);
+
+				/* If we told worker to catch up, wait for it. */
+				if (rstate->state == SUBREL_STATE_SYNCWAIT)
+				{
+					/* Signal the sync worker, as it may be waiting for us. */
+					if (syncworker->proc)
+						logicalrep_worker_wakeup_ptr(syncworker);
+
+					/* Now safe to release the LWLock */
+					LWLockRelease(LogicalRepWorkerLock);
+
+					if (started_tx)
+					{
+						/*
+						 * We must commit the existing transaction to release
+						 * the existing locks before entering a busy loop.
+						 * This is required to avoid any undetected deadlocks
+						 * due to any existing lock as deadlock detector won't
+						 * be able to detect the waits on the latch.
+						 */
+						CommitTransactionCommand();
+						pgstat_report_stat(false);
+					}
+
+					/*
+					 * Enter busy loop and wait for synchronization worker to
+					 * reach expected state (or die trying).
+					 */
+					StartTransactionCommand();
+					started_tx = true;
+
+					wait_for_relation_state_change(rstate->relid,
+												   SUBREL_STATE_SYNCDONE);
+				}
+				else
+					LWLockRelease(LogicalRepWorkerLock);
+			}
+			else
+			{
+				/*
+				 * If there is no sync worker for this table yet, count
+				 * running sync workers for this subscription, while we have
+				 * the lock.
+				 */
+				int			nsyncworkers =
+					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+				/* Now safe to release the LWLock */
+				LWLockRelease(LogicalRepWorkerLock);
+
+				/*
+				 * If there are free sync worker slot(s), start a new sync
+				 * worker for the table.
+				 */
+				if (nsyncworkers < max_sync_workers_per_subscription)
+				{
+					TimestampTz now = GetCurrentTimestamp();
+					struct tablesync_start_time_mapping *hentry;
+					bool		found;
+
+					hentry = hash_search(last_start_times, &rstate->relid,
+										 HASH_ENTER, &found);
+
+					if (!found ||
+						TimestampDifferenceExceeds(hentry->last_start_time, now,
+												   wal_retrieve_retry_interval))
+					{
+						logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
+												 MyLogicalRepWorker->dbid,
+												 MySubscription->oid,
+												 MySubscription->name,
+												 MyLogicalRepWorker->userid,
+												 rstate->relid,
+												 DSM_HANDLE_INVALID);
+						hentry->last_start_time = now;
+					}
+				}
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		/*
+		 * Even when the two_phase mode is requested by the user, it remains
+		 * as 'pending' until all tablesyncs have reached READY state.
+		 *
+		 * When this happens, we restart the apply worker and (if the
+		 * conditions are still ok) then the two_phase tri-state will become
+		 * 'enabled' at that time.
+		 *
+		 * Note: If the subscription has no tables then leave the state as
+		 * PENDING, which allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
+		 * work.
+		 */
+		if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING)
+		{
+			CommandCounterIncrement();	/* make updates visible */
+			if (SyncAllTablesReady())
+			{
+				ereport(LOG,
+						(errmsg("logical replication apply worker for subscription \"%s\" will restart so that two_phase can be enabled",
+								MySubscription->name)));
+				should_exit = true;
+			}
+		}
+
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (should_exit)
+	{
+		/*
+		 * Reset the last-start time for this worker so that the launcher will
+		 * restart it without waiting for wal_retrieve_retry_interval.
+		 */
+		ApplyLauncherForgetWorkerStartTime(MySubscription->oid);
+
+		proc_exit(0);
+	}
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			process_syncing_tables_for_sync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			process_syncing_tables_for_apply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+static bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATION_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATION_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch all non-ready tables. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATION_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATION_STATE_VALID;
+	}
+
+	return has_subtables;
+}
+
+/*
+ * If the subscription has no tables then return false.
+ *
+ * Otherwise, are all tablesyncs READY?
+ *
+ * Note: This function is not suitable to be called from outside of apply or
+ * tablesync workers because MySubscription needs to be already initialized.
+ */
+bool
+SyncAllTablesReady(void)
+{
+	bool		started_tx = false;
+	bool		has_subrels = false;
+
+	/* We need up-to-date sync state info for subscription tables here. */
+	has_subrels = FetchRelationStates(&started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/*
+	 * Return false when there are no tables in subscription or not all tables
+	 * are in ready state; true otherwise.
+	 */
+	return has_subrels && (table_states_not_ready == NIL);
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..cea5e09a8c 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +134,7 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
+bool
 wait_for_relation_state_change(Oid relid, char expected_state)
 {
 	char		state;
@@ -274,423 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
-/*
- * Handle table synchronization cooperation from the synchronization
- * worker.
- *
- * If the sync worker is in CATCHUP state and reached (or passed) the
- * predetermined synchronization point in the WAL stream, mark the table as
- * SYNCDONE and finish.
- */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
-{
-	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
-
-	if (MyLogicalRepWorker->relstate == SUBREL_STATE_CATCHUP &&
-		current_lsn >= MyLogicalRepWorker->relstate_lsn)
-	{
-		TimeLineID	tli;
-		char		syncslotname[NAMEDATALEN] = {0};
-		char		originname[NAMEDATALEN] = {0};
-
-		MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCDONE;
-		MyLogicalRepWorker->relstate_lsn = current_lsn;
-
-		SpinLockRelease(&MyLogicalRepWorker->relmutex);
-
-		/*
-		 * UpdateSubscriptionRelState must be called within a transaction.
-		 */
-		if (!IsTransactionState())
-			StartTransactionCommand();
-
-		UpdateSubscriptionRelState(MyLogicalRepWorker->subid,
-								   MyLogicalRepWorker->relid,
-								   MyLogicalRepWorker->relstate,
-								   MyLogicalRepWorker->relstate_lsn);
-
-		/*
-		 * End streaming so that LogRepWorkerWalRcvConn can be used to drop
-		 * the slot.
-		 */
-		walrcv_endstreaming(LogRepWorkerWalRcvConn, &tli);
-
-		/*
-		 * Cleanup the tablesync slot.
-		 *
-		 * This has to be done after updating the state because otherwise if
-		 * there is an error while doing the database operations we won't be
-		 * able to rollback dropped slot.
-		 */
-		ReplicationSlotNameForTablesync(MyLogicalRepWorker->subid,
-										MyLogicalRepWorker->relid,
-										syncslotname,
-										sizeof(syncslotname));
-
-		/*
-		 * It is important to give an error if we are unable to drop the slot,
-		 * otherwise, it won't be dropped till the corresponding subscription
-		 * is dropped. So passing missing_ok = false.
-		 */
-		ReplicationSlotDropAtPubNode(LogRepWorkerWalRcvConn, syncslotname, false);
-
-		CommitTransactionCommand();
-		pgstat_report_stat(false);
-
-		/*
-		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
-		 *
-		 * We need to do this after the table state is set to SYNCDONE.
-		 * Otherwise, if an error occurs while performing the database
-		 * operation, the worker will be restarted and the in-memory state of
-		 * replication progress (remote_lsn) won't be rolled-back which would
-		 * have been cleared before restart. So, the restarted worker will use
-		 * invalid replication progress state resulting in replay of
-		 * transactions that have already been applied.
-		 */
-		StartTransactionCommand();
-
-		ReplicationOriginNameForLogicalRep(MyLogicalRepWorker->subid,
-										   MyLogicalRepWorker->relid,
-										   originname,
-										   sizeof(originname));
-
-		/*
-		 * Resetting the origin session removes the ownership of the slot.
-		 * This is needed to allow the origin to be dropped.
-		 */
-		replorigin_session_reset();
-		replorigin_session_origin = InvalidRepOriginId;
-		replorigin_session_origin_lsn = InvalidXLogRecPtr;
-		replorigin_session_origin_timestamp = 0;
-
-		/*
-		 * Drop the tablesync's origin tracking if exists.
-		 *
-		 * There is a chance that the user is concurrently performing refresh
-		 * for the subscription where we remove the table state and its origin
-		 * or the apply worker would have removed this origin. So passing
-		 * missing_ok = true.
-		 */
-		replorigin_drop_by_name(originname, true, false);
-
-		finish_sync_worker();
-	}
-	else
-		SpinLockRelease(&MyLogicalRepWorker->relmutex);
-}
-
-/*
- * Handle table synchronization cooperation from the apply worker.
- *
- * Walk over all subscription tables that are individually tracked by the
- * apply process (currently, all that have state other than
- * SUBREL_STATE_READY) and manage synchronization for them.
- *
- * If there are tables that need synchronizing and are not being synchronized
- * yet, start sync workers for them (if there are free slots for sync
- * workers).  To prevent starting the sync worker for the same relation at a
- * high frequency after a failure, we store its last start time with each sync
- * state info.  We start the sync worker for the same relation after waiting
- * at least wal_retrieve_retry_interval.
- *
- * For tables that are being synchronized already, check if sync workers
- * either need action from the apply worker or have finished.  This is the
- * SYNCWAIT to CATCHUP transition.
- *
- * If the synchronization position is reached (SYNCDONE), then the table can
- * be marked as READY and is no longer tracked.
- */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
-{
-	struct tablesync_start_time_mapping
-	{
-		Oid			relid;
-		TimestampTz last_start_time;
-	};
-	static HTAB *last_start_times = NULL;
-	ListCell   *lc;
-	bool		started_tx = false;
-	bool		should_exit = false;
-
-	Assert(!IsTransactionState());
-
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
-
-	/*
-	 * Prepare a hash table for tracking last start times of workers, to avoid
-	 * immediate restarts.  We don't need it if there are no tables that need
-	 * syncing.
-	 */
-	if (table_states_not_ready != NIL && !last_start_times)
-	{
-		HASHCTL		ctl;
-
-		ctl.keysize = sizeof(Oid);
-		ctl.entrysize = sizeof(struct tablesync_start_time_mapping);
-		last_start_times = hash_create("Logical replication table sync worker start times",
-									   256, &ctl, HASH_ELEM | HASH_BLOBS);
-	}
-
-	/*
-	 * Clean up the hash table when we're done with all tables (just to
-	 * release the bit of memory).
-	 */
-	else if (table_states_not_ready == NIL && last_start_times)
-	{
-		hash_destroy(last_start_times);
-		last_start_times = NULL;
-	}
-
-	/*
-	 * Process all tables that are being synchronized.
-	 */
-	foreach(lc, table_states_not_ready)
-	{
-		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
-
-		if (rstate->state == SUBREL_STATE_SYNCDONE)
-		{
-			/*
-			 * Apply has caught up to the position where the table sync has
-			 * finished.  Mark the table as ready so that the apply will just
-			 * continue to replicate it normally.
-			 */
-			if (current_lsn >= rstate->lsn)
-			{
-				char		originname[NAMEDATALEN];
-
-				rstate->state = SUBREL_STATE_READY;
-				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
-
-				/*
-				 * Remove the tablesync origin tracking if exists.
-				 *
-				 * There is a chance that the user is concurrently performing
-				 * refresh for the subscription where we remove the table
-				 * state and its origin or the tablesync worker would have
-				 * already removed this origin. We can't rely on tablesync
-				 * worker to remove the origin tracking as if there is any
-				 * error while dropping we won't restart it to drop the
-				 * origin. So passing missing_ok = true.
-				 */
-				ReplicationOriginNameForLogicalRep(MyLogicalRepWorker->subid,
-												   rstate->relid,
-												   originname,
-												   sizeof(originname));
-				replorigin_drop_by_name(originname, true, false);
-
-				/*
-				 * Update the state to READY only after the origin cleanup.
-				 */
-				UpdateSubscriptionRelState(MyLogicalRepWorker->subid,
-										   rstate->relid, rstate->state,
-										   rstate->lsn);
-			}
-		}
-		else
-		{
-			LogicalRepWorker *syncworker;
-
-			/*
-			 * Look for a sync worker for this relation.
-			 */
-			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-
-			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
-			if (syncworker)
-			{
-				/* Found one, update our copy of its state */
-				SpinLockAcquire(&syncworker->relmutex);
-				rstate->state = syncworker->relstate;
-				rstate->lsn = syncworker->relstate_lsn;
-				if (rstate->state == SUBREL_STATE_SYNCWAIT)
-				{
-					/*
-					 * Sync worker is waiting for apply.  Tell sync worker it
-					 * can catchup now.
-					 */
-					syncworker->relstate = SUBREL_STATE_CATCHUP;
-					syncworker->relstate_lsn =
-						Max(syncworker->relstate_lsn, current_lsn);
-				}
-				SpinLockRelease(&syncworker->relmutex);
-
-				/* If we told worker to catch up, wait for it. */
-				if (rstate->state == SUBREL_STATE_SYNCWAIT)
-				{
-					/* Signal the sync worker, as it may be waiting for us. */
-					if (syncworker->proc)
-						logicalrep_worker_wakeup_ptr(syncworker);
-
-					/* Now safe to release the LWLock */
-					LWLockRelease(LogicalRepWorkerLock);
-
-					if (started_tx)
-					{
-						/*
-						 * We must commit the existing transaction to release
-						 * the existing locks before entering a busy loop.
-						 * This is required to avoid any undetected deadlocks
-						 * due to any existing lock as deadlock detector won't
-						 * be able to detect the waits on the latch.
-						 */
-						CommitTransactionCommand();
-						pgstat_report_stat(false);
-					}
-
-					/*
-					 * Enter busy loop and wait for synchronization worker to
-					 * reach expected state (or die trying).
-					 */
-					StartTransactionCommand();
-					started_tx = true;
-
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
-				}
-				else
-					LWLockRelease(LogicalRepWorkerLock);
-			}
-			else
-			{
-				/*
-				 * If there is no sync worker for this table yet, count
-				 * running sync workers for this subscription, while we have
-				 * the lock.
-				 */
-				int			nsyncworkers =
-					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
-
-				/* Now safe to release the LWLock */
-				LWLockRelease(LogicalRepWorkerLock);
-
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-												 MyLogicalRepWorker->dbid,
-												 MySubscription->oid,
-												 MySubscription->name,
-												 MyLogicalRepWorker->userid,
-												 rstate->relid,
-												 DSM_HANDLE_INVALID);
-						hentry->last_start_time = now;
-					}
-				}
-			}
-		}
-	}
-
-	if (started_tx)
-	{
-		/*
-		 * Even when the two_phase mode is requested by the user, it remains
-		 * as 'pending' until all tablesyncs have reached READY state.
-		 *
-		 * When this happens, we restart the apply worker and (if the
-		 * conditions are still ok) then the two_phase tri-state will become
-		 * 'enabled' at that time.
-		 *
-		 * Note: If the subscription has no tables then leave the state as
-		 * PENDING, which allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
-		 * work.
-		 */
-		if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING)
-		{
-			CommandCounterIncrement();	/* make updates visible */
-			if (AllTablesyncsReady())
-			{
-				ereport(LOG,
-						(errmsg("logical replication apply worker for subscription \"%s\" will restart so that two_phase can be enabled",
-								MySubscription->name)));
-				should_exit = true;
-			}
-		}
-
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	if (should_exit)
-	{
-		/*
-		 * Reset the last-start time for this worker so that the launcher will
-		 * restart it without waiting for wal_retrieve_retry_interval.
-		 */
-		ApplyLauncherForgetWorkerStartTime(MySubscription->oid);
-
-		proc_exit(0);
-	}
-}
-
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1320,7 +857,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,77 +1098,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1717,37 +1183,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
-}
-
-/*
- * If the subscription has no tables then return false.
- *
- * Otherwise, are all tablesyncs READY?
- *
- * Note: This function is not suitable to be called from outside of apply or
- * tablesync workers because MySubscription needs to be already initialized.
- */
-bool
-AllTablesyncsReady(void)
-{
-	bool		started_tx = false;
-	bool		has_subrels = false;
-
-	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/*
-	 * Return false when there are no tables in subscription or not all tables
-	 * are in ready state; true otherwise.
-	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	SyncFinishWorker();
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 245e9be6f2..6ef21c4273 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -1026,7 +1026,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1148,7 +1148,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1204,7 +1204,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1270,7 +1270,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1405,7 +1405,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2247,7 +2247,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3645,7 +3645,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4525,7 +4525,7 @@ run_apply_worker()
 	 * work.
 	 */
 	if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING &&
-		AllTablesyncsReady())
+		SyncAllTablesReady())
 	{
 		/* Start streaming with two_phase enabled */
 		options.proto.logical.twophase = true;
@@ -4679,7 +4679,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..ff25874d5e 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -250,18 +250,20 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
-extern bool AllTablesyncsReady(void);
+extern bool SyncAllTablesReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool wait_for_relation_state_change(Oid relid, char expected_state);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
-- 
2.34.1

Attachment: v20240819-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 6ad24cab122eb14f11648d753b8428796ec2179b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240819 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced to show which
publications include a given sequence and which sequences are included in a
given publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
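
To make the documented syntax concrete, here is a small usage sketch (not part
of the patch; the publication and sequence names are made up) showing how the
two FOR ALL forms are reflected in the new catalog flag:

CREATE SEQUENCE s1;
CREATE PUBLICATION pub_seq  FOR ALL SEQUENCES;           -- superuser only
CREATE PUBLICATION pub_both FOR ALL TABLES, SEQUENCES;   -- superuser only

SELECT pubname, puballtables, puballsequences
  FROM pg_publication
 WHERE pubname IN ('pub_seq', 'pub_both');
-- pub_seq  : puballtables = f, puballsequences = t
-- pub_both : puballtables = t, puballsequences = t
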
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
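
For illustration, the set of relations that GetAllSequencesPublicationRelations()
collects corresponds roughly to the following query (an approximation only; the
C code applies is_publishable_class() directly to each pg_class tuple):

SELECT c.oid::regclass AS sequence
  FROM pg_catalog.pg_class c
 WHERE c.relkind = 'S'            -- RELKIND_SEQUENCE
   AND c.relpersistence = 'p'     -- permanent sequences only
   AND c.oid >= 16384;            -- FirstNormalObjectId, i.e. skip system objects
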
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..b081b7249b 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1486,7 +1490,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
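
A quick behavioural sketch of the privilege checks above (the role name is
hypothetical, and pub_seq is assumed to be an existing FOR ALL SEQUENCES
publication; the error and hint texts are the ones introduced in this file):

CREATE ROLE regress_seq_user LOGIN;      -- not a superuser
SET ROLE regress_seq_user;
CREATE PUBLICATION pub_seq_fail FOR ALL SEQUENCES;
-- ERROR:  must be superuser to create a FOR ALL SEQUENCES publication
RESET ROLE;

ALTER PUBLICATION pub_seq OWNER TO regress_seq_user;
-- ERROR:  permission denied to change owner of publication "pub_seq"
-- HINT:  The owner of a FOR ALL SEQUENCES publication must be a superuser.
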
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set *all_tables and *all_sequences.
+ * Also check that each object type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
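
For reference, dumpPublication() now emits the FOR ALL clause as follows
(illustration only; the WITH clause is elided):

-- puballtables | puballsequences | dumped command
--      t       |        t        | CREATE PUBLICATION ... FOR ALL TABLES, SEQUENCES WITH (...);
--      t       |        f        | CREATE PUBLICATION ... FOR ALL TABLES WITH (...);
--      f       |        t        | CREATE PUBLICATION ... FOR ALL SEQUENCES WITH (...);
--      f       |        f        | CREATE PUBLICATION ... WITH (...);

Note that the object-type order is normalized on dump: a publication created
with FOR ALL SEQUENCES, TABLES is emitted as FOR ALL TABLES, SEQUENCES, which
is what the pub6 test below relies on.
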
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.topt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt.topt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..0ce48da963 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2251,6 +2251,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

v20240819-0005-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 68919c0baa6b637621e5fda0ad0722b6a2411cb8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240819 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2937384b00..4aad02e1ee 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a23a3d57e2..ed1fb93cbf 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1677,16 +1872,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -1999,8 +2196,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2014,7 +2211,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1
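
As a quick illustration of the "Refreshing Stale Sequences" guidance in the
documentation patch above (a minimal sketch reusing the s1/sub1 objects from
the example section; it is not part of the patch itself), one can compare the
sequence state on both nodes and refresh only if they have drifted:

test_pub=# SELECT last_value, is_called FROM s1;
 last_value | is_called
------------+-----------
         12 | t
(1 row)

test_sub=# SELECT last_value, is_called FROM s1;
 last_value | is_called
------------+-----------
         11 | t
(1 row)

-- the subscriber is behind, so re-synchronize all subscribed sequences
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION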

#161Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#160)
Re: Logical Replication of sequences

Hi Vignesh, Here are my review comments for the latest patchset

v20240819-0001. No changes. No comments.
v20240819-0002. No changes. No comments.
v20240819-0003. See below.
v20240819-0004. See below.
v20240819-0005. No changes. No comments.

///////////////////////

PATCH v20240819-0003

======
src/backend/replication/logical/syncutils.c

3.1.
+typedef enum
+{
+ SYNC_RELATION_STATE_NEEDS_REBUILD,
+ SYNC_RELATION_STATE_REBUILD_STARTED,
+ SYNC_RELATION_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity =
SYNC_RELATION_STATE_NEEDS_REBUILD;

There is some muddle of singular/plural names here. The
typedef/values/var should all match:

e.g. It could be like:
SYNC_RELATION_STATE_xxx --> SYNC_RELATION_STATES_xxx
SyncingRelationsState --> SyncRelationStates

But, a more radical change might be better.

typedef enum
{
RELATION_STATES_SYNC_NEEDED,
RELATION_STATES_SYNC_STARTED,
RELATION_STATES_SYNCED,
} SyncRelationStates;
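
With that, the matching static variable from the code quoted in 3.1 would
then read (just a sketch of the suggested naming, keeping the existing
variable name and initializer):

static SyncRelationStates relation_states_validity = RELATION_STATES_SYNC_NEEDED;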

~~~

3.2. GENERAL refactoring

I don't think all of the functions moved into syncutil.c truly belong there.

This new module was introduced to be for common/util functions for
tablesync and sequencesync, but with each patchset, it has been
sucking in more and more functions that maybe do not quite belong
here.

For example, AFAIK these below have logic that is *solely* for TABLES
(not for SEQUENCES). Perhaps it was convenient to dump them here
because they are statically called, but I felt they still logically
belong in tablesync.c:
- process_syncing_tables_for_sync(XLogRecPtr current_lsn)
- process_syncing_tables_for_apply(XLogRecPtr current_lsn)
- AllTablesyncsReady(void)

~~~

3.3.
+static bool
+FetchRelationStates(bool *started_tx)
+{

If this function can remain static then the name should be changed to
something like fetch_table_states, right?
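
e.g. just as a sketch, keeping the current signature:

static bool fetch_table_states(bool *started_tx);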

======
src/include/replication/worker_internal.h

3.4.
+extern bool wait_for_relation_state_change(Oid relid, char expected_state);

If this previously static function will be exposed now (it may not
need to be if some other functions are returned to tablesync.c) then
the function name should also be changed, right?
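
e.g. if it does end up exposed, a CamelCase name matching the other exported
sync functions might be something like (just a naming sketch, same signature):

extern bool WaitForRelationStateChange(Oid relid, char expected_state);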

////////////////////////

PATCH v20240819-0004

======
src/backend/replication/logical/syncutils.c

4.1 GENERAL refactoring

(this is similar to review comment #3.2 above)

Functions like below have logic that is *solely* for SEQUENCES (not
for TABLES). I felt they logically belong in sequencesync.c, not here.
- process_syncing_sequences_for_apply(void)

~~~

FetchRelationStates:
nit - the comment change about "not-READY tables" (instead of
relations) should already be in patch 0003.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#162vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#161)
5 attachment(s)
Re: Logical Replication of sequences

On Tue, 20 Aug 2024 at 07:27, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my review comments for the latest patchset

v20240819-0001. No changes. No comments.
v20240819-0002. No changes. No comments.
v20240819-0003. See below.
v20240819-0004. See below.
v20240819-0005. No changes. No comments.

These comments are handled in the attached v20240820 version of the patch.

Regards,
Vignesh

Attachments:

v20240820-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 41a29aca178f0233b3a36442553e9a036ac3626b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240820 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2937384b00..4aad02e1ee 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 885a2d70ae..a0e406d91d 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   On the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1770,16 +1965,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2092,8 +2289,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2107,7 +2304,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

v20240820-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 3cd2f14ab333567faf741b57c41085a3e8507938 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240820 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 181 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 9 files changed, 222 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..077903f059 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -457,13 +457,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..b8f9300ea1
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,181 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But
+		 * if table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..ad92b84f6d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1320,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,77 +1475,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1717,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1735,7 +1578,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index cdea6295d8..2d99b0e116 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3716,7 +3716,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4750,7 +4750,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..24f74ab482 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
-- 
2.34.1

v20240820-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 42d8d6af35b0e5bd7cdafc561e0da2ca62081088 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240820 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 749360a4b7..03eb1bf6fc 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19627,6 +19627,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value recorded in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..5ede8442b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
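
A minimal sketch of how the new pg_sequence_state() function from 0001 can be
exercised (demo_seq is only an illustrative sequence name; the output columns
follow the pg_proc.dat entry above):

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');
-- returns the page LSN plus the on-disk last_value/log_cnt/is_called
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');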

Attachment: v20240820-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From b336d126e80e919fe01ce5650a33583ceebf7aef Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240820 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp are enhanced to show the
publications a sequence is included in and whether a publication publishes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
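
For example, with this patch applied the following statements are accepted
(they also appear in the documentation examples added below):

    CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
    CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;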
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..b081b7249b 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1486,7 +1490,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6d424c8918..3ea26c5cc8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2252,6 +2252,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20240820-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From 9a5148c9ab9c223a1d2b88a25fc708d73042817a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 20 Aug 2024 13:06:13 +0530
Subject: [PATCH v20240820 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization.
Sequences have two states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of the
       sequences that are in INIT state.
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done, committing the synchronized sequences in
       batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
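
As a rough usage sketch (not part of the patch), the subscriber-side flow
described above would look like the following. The subscription, connection,
and publication names are hypothetical; only the REFRESH PUBLICATION SEQUENCES
form is new syntax added by this patch, and the state query assumes the
existing SUBREL_STATE_INIT ('i') / SUBREL_STATE_READY ('r') codes:

-- assumes a publication 'mypub' on the publisher that includes sequences
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION mypub;                   -- sequences are added in INIT state

-- pick up tables/sequences newly added to the publication
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

-- new command: re-synchronize all published sequences
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

-- check per-sequence sync state on the subscriber ('i' = INIT, 'r' = READY)
SELECT srrelid::regclass, srsubstate
FROM pg_subscription_rel
WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');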
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 625 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  80 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 src/tools/pgindent/typedefs.list              |   2 +-
 27 files changed, 1428 insertions(+), 165 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
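
As a usage sketch (not part of the patch), the new set-returning function can
also be queried directly; 'mypub' is a hypothetical publication name, and the
result column is named relid, matching how the pg_publication_sequences view
added later in this patch references it:

SELECT relid::regclass AS sequence_name
FROM pg_get_publication_sequences('mypub');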
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 077903f059..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -468,7 +471,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * When getting tables, if all_states is true get all tables, otherwise
+ * get only the tables that have not yet reached READY state.
+ * When getting sequences, if all_states is true get all sequences,
+ * otherwise get only the sequences that are still in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
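
As a usage sketch (not part of the patch), the new view can be queried to list
the sequences published by a given publication; 'mypub' is a hypothetical
publication name:

SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'mypub';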
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn will be used by logical replication sequence synchronization
+ * to record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It will reflect the page_lsn of the remote sequence at the moment
+ * it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
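
For reference (a sketch, not part of the patch), the SQL-visible setval()
behavior is unchanged by routing it through SetSequence(); the sequence name
below is hypothetical:

CREATE SEQUENCE demo_seq;
SELECT setval('demo_seq', 100);          -- 2-arg form: is_called is set to true
SELECT nextval('demo_seq');              -- returns 101
SELECT setval('demo_seq', 100, false);   -- 3-arg form: is_called is set to false
SELECT nextval('demo_seq');              -- returns 100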
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index b925c464ae..bbe4346f27 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or publication refresh.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note that this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	Assert(!resync_all_sequences ||
+		   (copy_data && !refresh_tables && refresh_sequences));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop the tablesync worker when the relation is not a
+				 * sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip sequences, as they do not have tablesync slots. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
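
For illustration, fetch_sequence_list() above issues a query of the following
shape over the walreceiver connection; this is only a sketch of the generated
SQL, with 'mypub' standing in for an actual publication name:

  -- Run on the publisher; lists the published sequences by schema and name.
  SELECT DISTINCT s.schemaname, s.sequencename
  FROM pg_catalog.pg_publication_sequences s
  WHERE s.pubname IN ('mypub');
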
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 1086cbc962..e7a3577ee6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -855,7 +855,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
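
With the new production above, the command is used as follows; a sketch only,
with 'sub1' as a placeholder subscription name. Per the AlterSubscription()
changes earlier in this patch, it requires an enabled subscription and cannot
be run inside a transaction block:

  -- Re-synchronize all sequences of the subscription's publications.
  ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
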
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
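
Given the pg_stat_get_subscription() change above, a running sequencesync
worker should be visible in the subscription statistics, for example (this
assumes the view column populated from values[9] here is exposed as
worker_type, as for the other worker types):

  -- Show any active sequence-synchronization workers.
  SELECT subname, pid, worker_type
  FROM pg_stat_subscription
  WHERE worker_type = 'sequence synchronization';
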
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..b4a8fab9ce
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,625 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in a
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * any sequences still need to be synced after the sequencesync worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequencesync worker already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+	INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence list from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from repeatedly
+ * starting and committing a transaction. Conversely, an excessively large
+ * batch would keep a transaction, and the locks it holds, open for an
+ * extended period.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
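
For reference, the per-sequence query that fetch_remote_sequence_data() sends
to the publisher looks roughly like the following, with 16394 standing in for
the remote sequence OID resolved by copy_sequence() just before:

  -- Remote sequence state plus its definition, used to detect parameter
  -- mismatches before applying the value locally.
  SELECT last_value, log_cnt, is_called, page_lsn,
         seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
  FROM pg_sequence_state(16394), pg_sequence WHERE seqrelid = 16394;
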
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index b8f9300ea1..a038c5d7e7 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List *table_states_not_ready = NIL;
+List *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -100,7 +114,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -112,17 +138,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared static so the cached value is reused across calls until
+	 * an invalidation of pg_subscription_rel forces a rebuild.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -135,16 +166,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -152,15 +186,19 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
 		/*
 		 * Does the subscription have tables?
 		 *
-		 * If there were not-READY tables found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
 		 * see if there are 0 tables.
 		 */
 		has_subtables = (table_states_not_ready != NIL) ||
@@ -177,5 +215,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index ad92b84f6d..e474ea86a7 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesFoSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1234,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1552,7 +1552,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1560,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1574,17 +1574,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 2d99b0e116..cccc5ab122 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3715,7 +3738,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4625,8 +4651,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4705,6 +4731,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4713,14 +4743,17 @@ InitializeLogRepWorker(void)
 	CommitTransactionCommand();
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4752,6 +4785,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index af227b1f24..0d3674c33d 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5ede8442b4..c9fd1fa26b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12015,6 +12015,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 24f74ab482..8058c47c63 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesFoSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..0d89651697
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should log a warning
+# when the sequence definition differs between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning about differing parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3ea26c5cc8..911c183449 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2789,13 +2789,13 @@ SupportRequestSelectivity
 SupportRequestSimplify
 SupportRequestWFuncMonotonic
 Syn
+SyncingRelationsState
 SyncOps
 SyncRepConfigData
 SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

#163Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#162)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh,

Here are my only review comments for the latest patch set.

v20240820-0003.

nit - missing period for comment in FetchRelationStates
nit - typo in function name 'ProcessSyncingTablesFoSync'

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20240821_SEQ_0003.txt (text/plain)
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index b8f9300..705a330 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -96,7 +96,7 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			ProcessSyncingTablesFoSync(current_lsn);
+			ProcessSyncingTablesForSync(current_lsn);
 			break;
 
 		case WORKERTYPE_APPLY:
@@ -143,7 +143,7 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state */
+		/* Fetch tables that are in non-ready state. */
 		rstates = GetSubscriptionRelations(MySubscription->oid, true);
 
 		/* Allocate the tracking info in a permanent memory context. */
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index ad92b84..c753f45 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -237,7 +237,7 @@ wait_for_worker_state_change(char expected_state)
  * SYNCDONE and finish.
  */
 void
-ProcessSyncingTablesFoSync(XLogRecPtr current_lsn)
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 24f74ab..6504b70 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -264,7 +264,7 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern bool FetchRelationStates(bool *started_tx);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
-extern void ProcessSyncingTablesFoSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
#164vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#163)
5 attachment(s)
Re: Logical Replication of sequences

On Wed, 21 Aug 2024 at 08:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my only review comments for the latest patch set.

Thanks, these issues have been addressed in the updated version.
Additionally, I have fixed the pgindent problems that were reported
and included another advantage of this design in the file header of
the sequencesync file.
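
For anyone trying out the patches, here is a minimal usage sketch distilled
from the attached 034_sequences.pl test and the 0004 commit message (the
connection string below is only a placeholder, and the syntax may still
change during review):

    -- Publisher: publish all sequences in the database.
    CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

    -- Subscriber: unchanged CREATE SUBSCRIPTION syntax; published sequences
    -- are added to pg_subscription_rel in INIT state and synchronized by the
    -- new sequencesync worker.
    CREATE SUBSCRIPTION regress_seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION regress_seq_pub;

    -- Re-synchronize all published sequences (new command in the 0004 patch).
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

    -- Inspect the published sequences (on the publisher) and their sync
    -- state (on the subscriber); srsubstate 'r' means READY.
    SELECT * FROM pg_publication_sequences;
    SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;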

Regards,
Vignesh

Attachments:

v20240821-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 57d5ed1d37008afcf83f22ed46b10ff01a894bca Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240821 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 181 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 223 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..077903f059 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -457,13 +457,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..fdd579b639
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,181 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List	   *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..c753f45704 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1320,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,77 +1475,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1717,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1735,7 +1578,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 38c2895307..8933f87f0a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3716,7 +3716,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4774,7 +4774,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3ea26c5cc8..154653f481 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2795,7 +2795,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

v20240821-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 9f74c10e684a0a7fa8e04605b80f0665f584e426 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 21 Aug 2024 10:47:35 +0530
Subject: [PATCH v20240821 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of sequences that are in INIT state.
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 630 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  77 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 26 files changed, 1431 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 077903f059..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -468,7 +471,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * When getting tables: if all_states is true, get all tables; otherwise
+ * get only the tables that have not reached READY state.
+ * When getting sequences: if all_states is true, get all sequences;
+ * otherwise get only the sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..a6475af855 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a23d2c87fd..3c861604e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3-arg setval procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1896,6 +1898,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page LSN of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page LSN of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index b925c464ae..bbe4346f27 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription by refreshing the set of published objects (tables
+ * and/or sequences) associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or the publication was last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or the publication was last refreshed.
+ *
+ * Note that this is a common function for handling the different REFRESH
+ * commands according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag is set to true only for the
+				 * REFRESH PUBLICATION SEQUENCES command, indicating that the
+				 * existing sequences need to be re-synchronized by resetting
+				 * their pg_subscription_rel state back to INIT.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * we only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
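
For clarity, the query that fetch_sequence_list() sends over the walreceiver
connection boils down to the following; 'pub1' and 'pub2' stand in for the
subscription's publication list:

    SELECT DISTINCT s.schemaname, s.sequencename
      FROM pg_catalog.pg_publication_sequences s
     WHERE s.pubname IN ('pub1', 'pub2');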
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 1086cbc962..e7a3577ee6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -855,7 +855,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21a7f67256..5b14393015 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10817,11 +10817,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b83967cda3..86bc9d60a6 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c566d50a07..e2d63e8214 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker's failure time in the subscription's apply
+ * worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
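
As a quick way to observe the new worker type (assuming the existing
pg_stat_subscription view exposes worker_type, as in recent releases), one can
run the following on the subscriber while a sync is in progress:

    SELECT subid, subname, pid, worker_type
      FROM pg_stat_subscription
     WHERE worker_type = 'sequence synchronization';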
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..5b9d48b1ce
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,630 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+	INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_catalog.pg_sequence_state(%u), pg_catalog.pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence list from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence at the time it was fetched. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. At the same time, an excessively
+ * large batch would keep a transaction (and the sequence locks) open for an
+ * extended period, so cap the batch size.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence copy does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, emit a warning for the sequences whose
+		 * parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last remaining sequence to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during the current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling, and
+ * disable the subscription if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable; such errors simply
+ * terminate the worker.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
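
To diagnose the "parameters differ" warning emitted above, the sequence
definitions can be compared on both nodes; a sketch, with 's1' as a
placeholder sequence name:

    SELECT seqtypid::regtype, seqstart, seqincrement, seqmin, seqmax, seqcycle
      FROM pg_sequence
     WHERE seqrelid = 's1'::regclass;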
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index fdd579b639..9e163f0565 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List	   *table_states_not_ready = NIL;
+List	   *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -100,7 +114,20 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -112,17 +139,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if the subscription has one or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared static so that the value is remembered until the
+	 * subscription relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -135,16 +167,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -152,7 +187,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -177,5 +216,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
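
Since tables and sequences now share pg_subscription_rel, the sequences that
FetchRelationStates() would still classify as not-ready can be listed on the
subscriber with a query along these lines:

    SELECT sr.srrelid::regclass AS seq, sr.srsubstate
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid
     WHERE c.relkind = 'S' AND sr.srsubstate <> 'r';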
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index c753f45704..56cb2fbffb 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1234,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1552,7 +1552,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1560,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1574,17 +1574,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8933f87f0a..e927e8d250 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3715,7 +3738,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4625,8 +4651,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4705,6 +4731,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4724,14 +4754,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4776,6 +4809,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index af227b1f24..0d3674c33d 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3d32ddbd7b..1a7aa21265 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5ede8442b4..c9fd1fa26b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12015,6 +12015,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a5f8279ed..346abdcef9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4230,7 +4230,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..d74b9a8259 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..0d89651697
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# when the sequence definitions do not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.34.1
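
To summarize what the 034_sequences.pl test above exercises, here is a
condensed SQL sketch (assuming the patch set is applied; the publication,
subscription, and sequence names are the ones used in the test):

-- publisher: add a new sequence and advance an already-subscribed one
CREATE SEQUENCE regress_s2;
SELECT nextval('regress_s1');

-- subscriber: REFRESH PUBLICATION starts syncing the newly published
-- regress_s2 but does not re-synchronize the already-subscribed regress_s1
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;

-- subscriber: REFRESH PUBLICATION SEQUENCES re-synchronizes all subscribed
-- sequences, including regress_s1
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;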

Attachment: v20240821-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From efd35133f848cbe6a6dbdcdec842077cf5b16cbc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 2 Aug 2024 09:25:33 +0530
Subject: [PATCH v20240821 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 461fc3f437..4f18b295dc 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19627,6 +19627,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8c1131f020..a23d2c87fd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1815,7 +1825,7 @@ pg_sequence_read_tuple(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = Int64GetDatum(seq->log_cnt);
@@ -1868,7 +1878,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1883,6 +1893,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current on-disk last_value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..5ede8442b4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3329,6 +3329,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_sequence_read_tuple', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index e749c4574e..35bbc78076 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index ea447938ae..e7cb761e74 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
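
For quick reference, a minimal usage sketch of the pg_sequence_state()
function added above (the sequence name is illustrative; the result columns
follow the regression test in the patch):

CREATE SEQUENCE seq_demo;
SELECT nextval('seq_demo');

-- page_lsn orders this snapshot relative to later sequence changes;
-- last_value, log_cnt, and is_called mirror the on-disk sequence tuple
SELECT page_lsn, last_value, log_cnt, is_called
FROM pg_sequence_state('seq_demo'::regclass);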

Attachment: v20240821-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 5c53e906da9ee7b5560f18e9af1ed56119fdd602 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240821 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index b654fae1b2..28ca21a772 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8102,16 +8102,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8145,7 +8148,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2937384b00..4aad02e1ee 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5206,8 +5206,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5230,10 +5230,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 885a2d70ae..a0e406d91d 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1770,16 +1965,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2092,8 +2289,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2107,7 +2304,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..5fbb0c9c45 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1
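
To illustrate how the pieces documented above are intended to fit together,
here is a rough end-to-end sketch. The subscription name and connection
string are made up for the example; the publication uses the FOR ALL
SEQUENCES syntax added by the 0002 patch below.

    -- on the publisher
    CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;

    -- on the subscriber
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION all_sequences;

    -- on the subscriber, whenever the sequence values need to be
    -- brought up to date with the publisher again
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

The pg_publication_sequences view added above can be queried on the publisher
to see which sequences a publication covers.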

Attachment: v20240821-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 3e32ea30e988d231e434a7719bfd53999d728ff4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 11 Jun 2024 22:26:57 +0530
Subject: [PATCH v20240821 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

In addition, the psql commands \d and \dRp have been enhanced to display
publications containing specific sequences and the sequences included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
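
For example, the following statements are accepted with this patch (the
publication names are arbitrary):

    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_all FOR ALL TABLES, SEQUENCES;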
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  24 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 705 insertions(+), 296 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..b081b7249b 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -808,7 +812,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (stmt->for_all_tables || stmt->for_all_sequences)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
@@ -1008,7 +1012,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate the relcache. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 	{
 		CacheInvalidateRelcacheAll();
 	}
@@ -1486,7 +1490,7 @@ RemovePublicationById(Oid pubid)
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	/* Invalidate relcache so that publication info is rebuilt. */
-	if (pubform->puballtables)
+	if (pubform->puballtables || pubform->puballsequences)
 		CacheInvalidateRelcacheAll();
 
 	CatalogTupleDelete(rel, &tup->t_self);
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3f25582c3..21a7f67256 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -272,6 +276,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SinglePartitionSpec *singlepartspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -455,7 +460,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10537,7 +10543,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10557,13 +10568,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10675,6 +10686,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19386,6 +19419,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b6e01d3d29..fccf810192 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 4b2e5870a9..6de1a769f2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -619,6 +619,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 5bcc2244d5..51c1370314 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2950,6 +2950,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7c9a1f234c..f58dae9f13 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6229,7 +6293,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6246,16 +6310,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6353,6 +6425,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6369,6 +6442,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6382,6 +6456,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6433,6 +6511,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6440,6 +6520,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6450,6 +6532,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 024469474d..3d32ddbd7b 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3159,12 +3159,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 85a62b538e..3a5f8279ed 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4162,6 +4162,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4169,6 +4185,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3bbe4c5f97..2581b4934b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6248,9 +6248,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6d424c8918..3ea26c5cc8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2252,6 +2252,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

#165vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#164)
5 attachment(s)
Re: Logical Replication of sequences

On Wed, 21 Aug 2024 at 11:54, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 08:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my only review comments for the latest patch set.

Thanks, these issues have been addressed in the updated version.
Additionally, I have fixed the pgindent problems that were reported
and included another advantage of this design in the file header of
the sequencesync file.

The previous patch set no longer applied on top of HEAD, so here is a
rebased version of the patches.
I have also removed an invalidation that was not required for
sequences, and fixed a typo.

Regards,
Vignesh
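
To make the user-facing behaviour of the attached patches concrete, here is a
minimal sketch that mirrors the examples in the attached documentation and
regression tests (object names below are illustrative only):

-- 0001: read the on-disk state of a sequence, including its page LSN
CREATE SEQUENCE demo_seq;
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');

-- 0002: publish every sequence in the database (requires superuser)
CREATE PUBLICATION pub_all_sequences FOR ALL SEQUENCES;

-- or publish all tables and all sequences together
CREATE PUBLICATION pub_all_tables_sequences FOR ALL TABLES, SEQUENCES;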

Attachments:

v20240920-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 7dc19b2644091c6f6bc0aa6d27d32c39cd7a579c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20240920 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of the sequence state (last_value, log_cnt, is_called) along
with the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 6f75bd0c7d..195c5a9002 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19630,6 +19630,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..6d9451b641 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1817,7 +1827,7 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
@@ -1870,7 +1880,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1885,6 +1895,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value stored in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 43f608d7a0..32d72872b0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3365,6 +3365,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1
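
A brief usage note on the 0001 function above: because page_lsn is an
ordinary pg_lsn value, a caller can remember an LSN at the time a sequence is
copied and later check whether the sequence's on-disk state changed after that
point. A sketch using psql variables (this workflow is illustrative only and
not part of the patch):

-- remember a reference LSN, e.g. just after copying the sequence
SELECT pg_current_wal_lsn() AS copy_lsn \gset
-- later: has the sequence's on-disk state changed since then?
SELECT page_lsn > :'copy_lsn'::pg_lsn AS changed_since_copy
  FROM pg_sequence_state('sequence_test');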

v20240920-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From d68caaa04c5bfd2a0d95c34dc2ea20ee32a60225 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:46:27 +0530
Subject: [PATCH v20240920 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced to display the
publications a given sequence belongs to and the sequences included in a
given publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  18 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.c               |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 702 insertions(+), 293 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
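+	/*
+	 * Collect all publishable sequences; is_publishable_class() filters out
+	 * temporary and unlogged sequences as well as catalog relations.
+	 */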
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..9883390190 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
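
As a quick illustration of the tightened privilege checks above (not part of
the patch or its tests), a non-superuser attempting to create such a
publication is expected to fail along these lines:

    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
    ERROR:  must be superuser to create a FOR ALL SEQUENCES publication
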
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ab304ca989..011e3072a8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -271,6 +275,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -454,7 +459,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10520,7 +10526,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10540,13 +10551,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10658,6 +10669,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
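+/* object type list accepted after FOR ALL: TABLES and/or SEQUENCES */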
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19367,6 +19400,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences, and report an
+ * error if any publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
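
To make the accepted syntax concrete, the new grammar together with
preprocess_pub_all_objtype_list() is meant to handle statements like the
following (publication names are illustrative; the same cases are exercised
by the regression tests further down):

    CREATE PUBLICATION pub_seq  FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_both FOR ALL TABLES, SEQUENCES;
    CREATE PUBLICATION pub_rev  FOR ALL SEQUENCES, TABLES;  -- same effect as pub_both

    -- rejected: each object type may be specified only once
    CREATE PUBLICATION pub_dup  FOR ALL SEQUENCES, SEQUENCES;
    ERROR:  invalid publication object list
    DETAIL:  SEQUENCES can be specified only once.
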
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 130b80775d..4b839c9e5c 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4293,23 +4294,29 @@ getPublications(Archive *fout)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4321,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4340,6 +4348,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4387,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 9f907ed5ad..7a44cb2cf5 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -621,6 +621,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index ab6c830491..b364a95759 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2986,6 +2986,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index faabecbc76..044eeb6281 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6237,7 +6301,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6254,16 +6318,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6361,6 +6433,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6377,6 +6450,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6390,6 +6464,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6441,6 +6519,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6448,6 +6528,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6458,6 +6540,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index a7ccde6d7d..c1c25b5083 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3145,12 +3145,12 @@ psql_completion(const char *text, int start, int end)
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e62ce1b753..227214b675 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4160,6 +4160,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4167,6 +4183,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 6aeb7cb963..52c4b29f4c 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6303,9 +6303,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ace5414fa5..10c25c2788 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2254,6 +2254,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1
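
To make the regression-test churn above easier to read, here is a minimal sketch of the new syntax and the extra "All sequences" describe column it exercises (assuming the preceding patches are applied; the object names are illustrative only):

CREATE SEQUENCE demo_seq;

-- Publish every sequence in the database; this sets
-- pg_publication.puballsequences = true for the publication.
CREATE PUBLICATION demo_pub FOR ALL SEQUENCES;

-- \dRp+ demo_pub should now report the new "All sequences" column as 't',
-- while the table-only publications in the tests above report 'f'.

DROP PUBLICATION demo_pub;
DROP SEQUENCE demo_seq;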

Attachment: v20240920-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch; charset=US-ASCII)
From 2245e14bf3c408431dd25d7328c5c5713cac7be0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20240920 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 181 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 223 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..077903f059 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -457,13 +457,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..fdd579b639
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,181 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List	   *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..c753f45704 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1320,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,77 +1475,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1717,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1735,7 +1578,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 925dff9cc4..6f0cf34eb1 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4775,7 +4775,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 10c25c2788..d89e2d2693 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2795,7 +2795,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1
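
As a rough illustration of how the factored-out routines above are meant to be shared, here is a hypothetical sketch of a sequence synchronization worker built on them; only FetchRelationStates() and SyncFinishWorker() come from the patch, while the function name and the sequence-copy step are invented for illustration:

/*
 * Hypothetical sketch only: reuse of the common syncutils.c routines by a
 * sequence synchronization worker.  The function name and the sequence-copy
 * step are placeholders, not part of the posted patches.
 */
static void
run_sequencesync_worker(void)
{
	bool		started_tx = false;

	/* Refresh the cached pg_subscription_rel state for this subscription. */
	(void) FetchRelationStates(&started_tx);

	/* ... fetch and apply the published sequence values here ... */

	/* Per the FetchRelationStates() contract, the caller commits. */
	if (started_tx)
		CommitTransactionCommand();

	/* Shared exit path: flushes WAL, logs, and signals the apply worker. */
	SyncFinishWorker();
}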

Attachment: v20240920-0005-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 1c03b607836a5ff8e21ac1b04c91135cbe52fdec Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20240920 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index bfb97865e1..96ed9a8d04 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8113,16 +8113,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8156,7 +8159,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0aec11f443..880ca10a88 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5202,8 +5202,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5226,10 +5226,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index df62eb45ff..2211650c9e 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1567,6 +1567,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
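As a rough illustration (the values here are arbitrary examples, not recommendations),
the relevant settings could be adjusted on the subscriber with:

    -- Example only: leave headroom for the table synchronization workers plus
    -- the single sequence synchronization worker per subscription.
    ALTER SYSTEM SET max_logical_replication_workers = 8;    -- needs a server restart
    ALTER SYSTEM SET max_sync_workers_per_subscription = 2;  -- takes effect on reload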
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
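For instance, if the subscriber logged a WARNING because a sequence was created with a
different increment than on the publisher (the sequence and subscription names below are
the ones used in the examples later in this section; the mismatch itself is hypothetical),
the fix would look roughly like:

    -- On the subscriber: align the definition with the publisher ...
    ALTER SEQUENCE s2 INCREMENT BY 10;
    -- ... then re-synchronize the sequence data.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;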
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber may frequently become out of sync due
+    to updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    the subscriber, and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
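One way to do the comparison, assuming the sequences have the same names on both nodes,
is to run the same query against the publisher and the subscriber and compare the output:

    -- Run on both nodes; differing last_value entries indicate stale sequences.
    SELECT schemaname, sequencename, last_value
    FROM pg_sequences
    ORDER BY schemaname, sequencename;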
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1873,16 +2068,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
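As a minimal sketch of the manual approach (the sequence name and value below are only
placeholders), the publisher's current position can be read and then applied on the
subscriber with setval():

    -- On the publisher: read the current position of the sequence.
    SELECT last_value, is_called FROM s1;
    -- On the subscriber: apply it (12 and true are placeholder values).
    SELECT pg_catalog.setval('s1', 12, true);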
 
@@ -2195,8 +2392,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2210,7 +2407,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index a2fda4677d..79f75e4303 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 740b7d9421..1b1c9994e0 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 634a4c0fab..4261637af7 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
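For example, to list the sequences exposed by a given publication (the publication name
is a placeholder):

    SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';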
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

Attachment: v20240920-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch, charset US-ASCII)
From 23d51ae243e22c8a6c446a289d3e6ee2dd47bd16 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 21 Aug 2024 10:47:35 +0530
Subject: [PATCH v20240920 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of INIT-state sequences using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
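
A quick way to watch this INIT/READY progression from SQL on the subscriber (assuming
the pg_subscription_rel columns keep their current names) is:

    -- For sequences, 'i' corresponds to INIT (needs synchronizing) and
    -- 'r' to READY (already synchronized).
    SELECT srrelid::regclass AS relation, srsubstate
    FROM pg_subscription_rel
    ORDER BY 1;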
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 630 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  76 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.c                   |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 ++++++
 26 files changed, 1430 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 077903f059..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -468,7 +471,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * For tables: if all_states is true, get all tables; otherwise get only
+ * tables that have not yet reached READY state.
+ * For sequences: if all_states is true, get all sequences; otherwise get
+ * only sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7fd5d256a1..ab2f234f26 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 6d9451b641..c53dfece26 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1898,6 +1900,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page_lsn of the remote sequence at the moment
+ * it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 02ccc636b8..e9ec53314e 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, stmt->subname,
+														publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +798,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +826,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +865,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +926,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +950,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn, sub->name,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +979,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1004,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1021,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1039,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 1086cbc962..e7a3577ee6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -855,7 +855,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 011e3072a8..20bed17c6c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10800,11 +10800,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..c7f1ff51d6 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..5b9d48b1ce
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,630 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[10] = {INT8OID, INT8OID, BOOLOID, LSNOID, OIDOID,
+	INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, 10, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence information from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, 4, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, 5, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, 6, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, 7, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, 8, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, 9, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, 10, &isnull));
+	Assert(!isnull);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index fdd579b639..0bb47bfa74 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List	   *table_states_not_ready = NIL;
+List	   *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized and
+ * start new tablesync workers for the newly added tables and start new
+ * sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -100,7 +114,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -112,17 +138,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared as static, since the same value can be used until the
+	 * system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -135,16 +166,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -152,7 +186,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -177,5 +215,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index c753f45704..56cb2fbffb 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1234,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1552,7 +1552,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1560,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1574,17 +1574,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6f0cf34eb1..82cc671cb8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4626,8 +4652,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4706,6 +4732,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4725,14 +4755,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4777,6 +4810,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 686309db58..4a441b35da 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index c1c25b5083..5d3c1dd6d6 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1936,7 +1936,7 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (HeadMatches("ALTER", "SUBSCRIPTION", MatchAny) &&
 			 TailMatches("REFRESH", "PUBLICATION", "WITH", "("))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 32d72872b0..0adc068e69 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12070,6 +12070,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 227214b675..e888cb1568 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4228,7 +4228,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index a1626f3fae..93d0b10026 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..b4734d0368
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.34.1

#166shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#165)
Re: Logical Replication of sequences

On Fri, Sep 20, 2024 at 9:36 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 11:54, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 08:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my only review comments for the latest patch set.

Thanks, these issues have been addressed in the updated version.
Additionally, I have fixed the pgindent problems that were reported
and included another advantage of this design in the file header of
the sequencesync file.

The patch was not applied on top of head, here is a rebased version of
the patches.
I have also removed an invalidation which was not required for
sequences and a typo.

Thank you for the patches. I would like to understand srsublsn and
page_lsn more. Please see the scenario below:

I have a sequence:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;

After refresh on sub:
postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380 -->pub's page_lsn

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 105 | 31 | t -->(I am assuming 0/152D830 is
local page_lsn corresponding to value=105)

Now I assume that page_lsn shall change *only* after calling nextval 31
times. But I observe strange behaviour

After running nextval on sub for 7 times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 140 | 24 | t -->correct

After running nextval on sub for 15 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 215 | 9 | t -->correct
(1 row)

Now after running it 6 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D990 | 245 | 28 | t --> how??

last_value increased in the expected way (6*5), but page_lsn changed
and log_cnt changed before we could complete the remaining runs as
well. Not sure why??

Now if I do refresh again:

postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380-->pub's page_lsn, same as old one.

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152DDB8 | 105 | 31 | t
(1 row)

Now, what is this page_lsn = 0/152DDB8? Should it be the one
corresponding to last_value=105 and thus shouldn't it match the
previous value of 0/152D830?

thanks
Shveta

#167vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#166)
Re: Logical Replication of sequences

On Thu, 26 Sept 2024 at 11:07, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 20, 2024 at 9:36 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 11:54, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 08:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my only review comments for the latest patch set.

Thanks, these issues have been addressed in the updated version.
Additionally, I have fixed the pgindent problems that were reported
and included another advantage of this design in the file header of
the sequencesync file.

The patch did not apply on top of HEAD, so here is a rebased version of
the patches.
I have also removed an invalidation that was not required for
sequences, and fixed a typo.

Thank You for the patches. I would like to understand srsublsn and
page_lsn more. Please see the scenario below:

I have a sequence:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;

After refresh on sub:
postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380 -->pub's page_lsn

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 105 | 31 | t -->(I am assuming 0/152D830 is
local page_lsn corresponding to value=105)

Now I assume that *only* after doing nextval for 31 times, page_lsn
shall change. But I observe strange behaviour

After running nextval on sub for 7 times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 140 | 24 | t -->correct

After running nextval on sub for 15 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 215 | 9 | t -->correct
(1 row)

Now after running it 6 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D990 | 245 | 28 | t --> how??

last_value increased in the expected way (6*5), but page_lsn changed
and log_cnt changed before we could complete the remaining runs as
well. Not sure why??

This can occur if a checkpoint happened at that time. The regression
test also has specific handling for this, as noted in a comment within
the sequence.sql test file:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time
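
For reference, the decision in nextval_internal() looks roughly like
the following (paraphrased from src/backend/commands/sequence.c; the
exact code may differ between versions). A new WAL record is forced not
only when log_cnt is exhausted, but also when the sequence page's LSN
is older than the latest checkpoint's redo pointer, and the forced
record logs SEQ_LOG_VALS (32) values ahead, which is why log_cnt can
jump back up after a checkpoint:

	/* Paraphrased sketch, not the exact backend code. */
	if (log < fetch || !seq->is_called)
	{
		/* forced log to satisfy local demand for values */
		fetch = log = fetch + SEQ_LOG_VALS;
		logit = true;
	}
	else
	{
		XLogRecPtr	redoptr = GetRedoRecPtr();

		if (PageGetLSN(page) <= redoptr)
		{
			/* last update of the sequence was before the checkpoint */
			fetch = log = fetch + SEQ_LOG_VALS;
			logit = true;
		}
	}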

Now if I do refresh again:

postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380-->pub's page_lsn, same as old one.

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152DDB8 | 105 | 31 | t
(1 row)

Now, what is this page_lsn = 0/152DDB8? Should it be the one
corresponding to last_value=105 and thus shouldn't it match the
previous value of 0/152D830?

After executing REFRESH PUBLICATION SEQUENCES, the publication value
will be resynchronized, and a new LSN will be generated and updated
for the publisher sequence (using the old value). Therefore, this is
not a concern.
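
Looking at the sequence synchronization patch, the resync presumably
goes through the new SetSequence() function, which rewrites the local
sequence tuple (and WAL-logs it) even when the value copied from the
publisher is unchanged, so the local page_lsn advances on every sync.
A rough, hypothetical sketch of that apply step (the remote_* variables
are illustrative names for the values fetched from the publisher via
pg_sequence_state(); the real call site is in sequencesync.c):

	/* Illustrative only; not the exact code from the patch. */
	SetSequence(localrelid,
				remote_last_value,	/* last_value from the publisher */
				remote_is_called,	/* is_called from the publisher */
				remote_log_cnt);	/* log_cnt from the publisher */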

Regards,
Vignesh

#168shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#167)
Re: Logical Replication of sequences

On Sun, Sep 29, 2024 at 12:34 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 26 Sept 2024 at 11:07, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 20, 2024 at 9:36 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 11:54, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 08:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my only review comments for the latest patch set.

Thanks, these issues have been addressed in the updated version.
Additionally, I have fixed the pgindent problems that were reported
and included another advantage of this design in the file header of
the sequencesync file.

The patch did not apply on top of HEAD, so here is a rebased version of
the patches.
I have also removed an invalidation that was not required for
sequences, and fixed a typo.

Thank You for the patches. I would like to understand srsublsn and
page_lsn more. Please see the scenario below:

I have a sequence:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;

After refresh on sub:
postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380 -->pub's page_lsn

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 105 | 31 | t -->(I am assuming 0/152D830 is
local page_lsn corresponding to value=105)

Now I assume that *only* after doing nextval for 31 times, page_lsn
shall change. But I observe strange behaviour

After running nextval on sub for 7 times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 140 | 24 | t -->correct

After running nextval on sub for 15 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 215 | 9 | t -->correct
(1 row)

Now after running it 6 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D990 | 245 | 28 | t --> how??

last_value increased in the expected way (6*5), but page_lsn changed
and log_cnt changed before we could complete the remaining runs as
well. Not sure why??

This can occur if a checkpoint happened at that time. The regression
test also has specific handling for this, as noted in a comment within
the sequence.sql test file:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time

Okay. I see. I tried by executing 'checkpoint' and can see the same behaviour.

Now if I do refresh again:

postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380-->pub's page_lsn, same as old one.

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152DDB8 | 105 | 31 | t
(1 row)

Now, what is this page_lsn = 0/152DDB8? Should it be the one
corresponding to last_value=105 and thus shouldn't it match the
previous value of 0/152D830?

After executing REFRESH PUBLICATION SEQUENCES, the publication value
will be resynchronized, and a new LSN will be generated and updated
for the publisher sequence (using the old value). Therefore, this is
not a concern.

Okay.

Few comments:

1)
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)

--fetch_sequence_list() is not using the argument subname anywhere.

2)

+ if (resync_all_sequences)
+ {
+ ereport(DEBUG1,
+ errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name));
+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+    InvalidXLogRecPtr);
+ }

--Shall we emit the DEBUG1 message only after UpdateSubscriptionRelState
is done? Otherwise we may end up emitting this log statement even if
the update fails for some reason.
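
Something along these lines, i.e. moving the log after the update (just
a sketch of the suggested ordering, not a tested change):

	if (resync_all_sequences)
	{
		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
								   InvalidXLogRecPtr);

		/* Log only once the catalog update has succeeded. */
		ereport(DEBUG1,
				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
								get_namespace_name(get_rel_namespace(relid)),
								get_rel_name(relid),
								sub->name));
	}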

3)
fetch_remote_sequence_data():

Should we have a macro REMOTE_SEQ_COL_COUNT 10 and use it instead of
the literal 10? Also, instead of passing 1, 2, 3, etc. to slot_getattr,
we could use ++col and at the end have:
Assert(col == REMOTE_SEQ_COL_COUNT);
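
For illustration, the suggested pattern could look something like this
(only a sketch; the variable names and the first two columns shown are
made up here, and the remaining columns would be fetched the same way):

#define REMOTE_SEQ_COL_COUNT	10

	int			col = 0;
	bool		isnull;

	/* ++col keeps the attribute numbers in sync with the column list */
	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	/* ... fetch the remaining columns the same way ... */

	Assert(col == REMOTE_SEQ_COL_COUNT);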

thanks
Shveta

#169vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#168)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 4 Oct 2024 at 15:39, shveta malik <shveta.malik@gmail.com> wrote:

On Sun, Sep 29, 2024 at 12:34 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 26 Sept 2024 at 11:07, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 20, 2024 at 9:36 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 11:54, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 08:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my only review comments for the latest patch set.

Thanks, these issues have been addressed in the updated version.
Additionally, I have fixed the pgindent problems that were reported
and included another advantage of this design in the file header of
the sequencesync file.

The patch did not apply on top of HEAD, so here is a rebased version of
the patches.
I have also removed an invalidation that was not required for
sequences, and fixed a typo.

Thank You for the patches. I would like to understand srsublsn and
page_lsn more. Please see the scenario below:

I have a sequence:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;

After refresh on sub:
postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380 -->pub's page_lsn

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 105 | 31 | t -->(I am assuming 0/152D830 is
local page_lsn corresponding to value=105)

Now I assume that *only* after doing nextval for 31 times, page_lsn
shall change. But I observe strange behaviour

After running nextval on sub for 7 times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 140 | 24 | t -->correct

After running nextval on sub for 15 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 215 | 9 | t -->correct
(1 row)

Now after running it 6 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D990 | 245 | 28 | t --> how??

last_value increased in the expected way (6*5), but page_lsn changed
and log_cnt changed before we could complete the remaining runs as
well. Not sure why??

This can occur if a checkpoint happened at that time. The regression
test also has specific handling for this, as noted in a comment within
the sequence.sql test file:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time

Okay. I see. I tried by executing 'checkpoint' and can see the same behaviour.

Now if I do refresh again:

postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380-->pub's page_lsn, same as old one.

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152DDB8 | 105 | 31 | t
(1 row)

Now, what is this page_lsn = 0/152DDB8? Should it be the one
corresponding to last_value=105 and thus shouldn't it match the
previous value of 0/152D830?

After executing REFRESH PUBLICATION SEQUENCES, the publication value
will be resynchronized, and a new LSN will be generated and updated
for the publisher sequence (using the old value). Therefore, this is
not a concern.

Okay.

Few comments:

1)
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)

--fetch_sequence_list() is not using the argument subname anywhere.

2)

+ if (resync_all_sequences)
+ {
+ ereport(DEBUG1,
+ errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name));
+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+    InvalidXLogRecPtr);
+ }

--Shall we emit the DEBUG1 message only after UpdateSubscriptionRelState
is done? Otherwise we may end up emitting this log statement even if
the update fails for some reason.

3)
fetch_remote_sequence_data():

Should we have a macro REMOTE_SEQ_COL_COUNT 10 and use it instead of
the literal 10? Also, instead of passing 1, 2, 3, etc. to slot_getattr,
we could use ++col and at the end have:
Assert(col == REMOTE_SEQ_COL_COUNT);

Thanks for the comments, these are addressed in the attached patch.

Regards,
Vignesh

Attachments:

v20241008-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 69389f4a28d90bb806c292430e13b43398564aac Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20241008 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 +++++++
 src/backend/commands/sequence.c        | 94 +++++++++++++++++++++++---
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 ++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 7b4fbb5047..043062ced4 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19632,6 +19632,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..6d9451b641 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -102,7 +103,8 @@ static Relation lock_and_open_sequence(SeqTable seq);
 static void create_seq_hashtable(void);
 static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
 static Form_pg_sequence_data read_seq_tuple(Relation rel,
-											Buffer *buf, HeapTuple seqdatatuple);
+											Buffer *buf, HeapTuple seqdatatuple,
+											XLogRecPtr *lsn_ret);
 static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool isInit,
 						Form_pg_sequence seqform,
@@ -277,7 +279,7 @@ ResetSequence(Oid seq_relid)
 	 * indeed a sequence.
 	 */
 	init_sequence(seq_relid, &elm, &seq_rel);
-	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seq_rel, &buf, &seqdatatuple, NULL);
 
 	pgstuple = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seq_relid));
 	if (!HeapTupleIsValid(pgstuple))
@@ -476,7 +478,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	seqform = (Form_pg_sequence) GETSTRUCT(seqtuple);
 
 	/* lock page buffer and read tuple into new sequence structure */
-	(void) read_seq_tuple(seqrel, &buf, &datatuple);
+	(void) read_seq_tuple(seqrel, &buf, &datatuple, NULL);
 
 	/* copy the existing sequence data tuple, so it can be modified locally */
 	newdatatuple = heap_copytuple(&datatuple);
@@ -558,7 +560,7 @@ SequenceChangePersistence(Oid relid, char newrelpersistence)
 	if (RelationNeedsWAL(seqrel))
 		GetTopTransactionId();
 
-	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	(void) read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	RelationSetNewRelfilenumber(seqrel, newrelpersistence);
 	fill_seq_with_data(seqrel, &seqdatatuple);
 	UnlockReleaseBuffer(buf);
@@ -687,7 +689,7 @@ nextval_internal(Oid relid, bool check_permissions)
 	ReleaseSysCache(pgstuple);
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 	page = BufferGetPage(buf);
 
 	last = next = result = seq->last_value;
@@ -983,7 +985,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	PreventCommandIfParallelMode("setval()");
 
 	/* lock page buffer and read tuple */
-	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+	seq = read_seq_tuple(seqrel, &buf, &seqdatatuple, NULL);
 
 	if ((next < minv) || (next > maxv))
 		ereport(ERROR,
@@ -1183,11 +1185,15 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
  * *buf receives the reference to the pinned-and-ex-locked buffer
  * *seqdatatuple receives the reference to the sequence tuple proper
  *		(this arg should point to a local variable of type HeapTupleData)
+ * *lsn_ret will be set to the page LSN if the caller requested it.
+ *		This allows the caller to determine which sequence changes are
+ *		before/after the returned sequence state.
  *
  * Function's return value points to the data payload of the tuple
  */
 static Form_pg_sequence_data
-read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
+read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple,
+			   XLogRecPtr *lsn_ret)
 {
 	Page		page;
 	ItemId		lp;
@@ -1204,6 +1210,10 @@ read_seq_tuple(Relation rel, Buffer *buf, HeapTuple seqdatatuple)
 		elog(ERROR, "bad magic number in sequence \"%s\": %08X",
 			 RelationGetRelationName(rel), sm->magic);
 
+	/* If the caller requested it, return the page LSN. */
+	if (lsn_ret)
+		*lsn_ret = PageGetLSN(page);
+
 	lp = PageGetItemId(page, FirstOffsetNumber);
 	Assert(ItemIdIsNormal(lp));
 
@@ -1817,7 +1827,7 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
@@ -1870,7 +1880,7 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
 
-		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple, NULL);
 
 		is_called = seq->is_called;
 		result = seq->last_value;
@@ -1885,6 +1895,72 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple, &lsn);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 77f54a79e6..81e76642b0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3365,6 +3365,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20241008-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 3f1c1a5c1e4529352d385999bba623b91025a497 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 8 Oct 2024 10:41:42 +0530
Subject: [PATCH v20241008 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of the INIT-state sequences using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 636 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  76 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 +++++
 26 files changed, 1436 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 239799f987..6b961a286b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1274,3 +1274,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 077903f059..af2bfe1364 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -431,7 +432,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -468,7 +471,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -480,8 +484,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -493,12 +511,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -507,6 +534,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -514,7 +544,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -529,8 +559,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3456b821bc..88ec8735af 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 6d9451b641..c53dfece26 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -111,7 +111,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -942,9 +941,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -995,7 +997,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1012,8 +1014,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,8 +1046,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1053,14 +1055,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1069,7 +1071,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1898,6 +1900,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn will be utilized in logical replication sequence
+ * synchronization to record the page_lsn of sequence in the pg_subscription_rel
+ * system catalog. It will reflect the page_lsn of the remote sequence at the
+ * moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 02ccc636b8..07377bdb9a 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -723,6 +725,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -734,9 +742,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -751,6 +756,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -762,13 +771,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -785,6 +797,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -808,7 +825,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,12 +864,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -870,6 +925,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -889,10 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -911,9 +978,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -935,12 +1003,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -951,8 +1020,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -968,11 +1038,32 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -994,41 +1085,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1039,6 +1140,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1424,8 +1529,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1439,7 +1544,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1479,8 +1585,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1498,13 +1604,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1539,7 +1660,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1781,7 +1902,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1804,7 +1929,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2162,11 +2287,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2336,6 +2465,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	get_publications_str(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 54025c9f15..573c7eb26b 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -859,7 +859,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ffeb150697..031999d32f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10824,11 +10824,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
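
The new production accepts statements of the following form (the subscription
name is taken from the TAP test added below); unlike the existing REFRESH
PUBLICATION form, no WITH (...) options are accepted here:

    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;
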
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..c7f1ff51d6 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequencesync worker in the shared memory
+ * slot of the subscription's apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..6988b62a1b
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,636 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node, and checks whether the sequence definition parameters
+ * match between the local and remote sequences.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence data from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker,
+	 * so that synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * The sequence copy does not honor RLS policies.  That is not a
+		 * problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not
+		 * be able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
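
To make the remote round trip concrete: for each sequence in a batch,
fetch_remote_sequence_data() issues a query of the following shape on the
publisher (pg_sequence_state() is expected to be provided by an earlier patch
in this series; the OID below is only a placeholder for the remote sequence's
pg_class OID):

    SELECT last_value, log_cnt, is_called, page_lsn,
           seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
    FROM pg_sequence_state(16394), pg_sequence WHERE seqrelid = 16394;

last_value, log_cnt and is_called are applied to the local sequence via
SetSequence(), page_lsn is recorded as the sequence's sync LSN, and the
remaining columns are only compared against the local definition to detect
parameter mismatches.
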
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index fdd579b639..0bb47bfa74 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List	   *table_states_not_ready = NIL;
+List	   *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -100,7 +114,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -112,17 +138,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared as static, since the same value can be used until the
+	 * system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -135,16 +166,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -152,7 +186,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -177,5 +215,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index c753f45704..56cb2fbffb 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1234,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1552,7 +1552,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1560,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1574,17 +1574,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6f0cf34eb1..82cc671cb8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4626,8 +4652,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4706,6 +4732,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4725,14 +4755,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4777,6 +4810,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 686309db58..4a441b35da 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index e066cc1b6b..e5cbe78243 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2267,7 +2267,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 81e76642b0..12bd10a472 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12083,6 +12083,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 461581ca67..72b9b9c044 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4243,7 +4243,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2b47013f11..72a96049c7 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
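
On the publisher, the new view can be queried directly to see which sequences
a publication carries, e.g. with the publication created in the TAP test
below:

    SELECT * FROM pg_publication_sequences WHERE pubname = 'regress_seq_pub';
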
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..b4734d0368
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync newly
+# published sequences from the publisher, and changes to existing sequences
+# should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should log a warning
+# when the sequence definition does not match between the publisher and the
+# subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.34.1

Attachment: v20241008-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 3bac11d443601cce2c6564574ba6871bd4977167 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 8 Oct 2024 10:38:27 +0530
Subject: [PATCH v20241008 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp are enhanced to display the
publications that a given sequence is included in, and whether a publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
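
For example, as shown by the documentation examples added below:

    CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
    CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;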
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  18 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  24 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 702 insertions(+), 293 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fd9c5deac9..64214ba8d5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -158,6 +163,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -240,10 +255,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -259,8 +274,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -402,6 +418,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables and
+   synchronizes all sequences:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7fe5fe2b86..239799f987 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -979,6 +980,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1001,6 +1038,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..9883390190 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4aa8646af7..ffeb150697 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -215,6 +215,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -271,6 +275,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -454,7 +459,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -590,6 +595,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10544,7 +10550,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10564,13 +10575,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10682,6 +10693,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19391,6 +19424,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.  Also check
+ * whether any object type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	Assert(all_tables && *all_tables == false);
+	Assert(all_sequences && *all_sequences == false);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1b47c388ce..11163b5396 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4291,23 +4292,29 @@ getPublications(Archive *fout)
 	query = createPQExpBuffer();
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 180000)
+		appendPQExpBufferStr(query,
+							 "SELECT p.tableoid, p.oid, p.pubname, "
+							 "p.pubowner, "
+							 "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
 							 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
 							 "SELECT p.tableoid, p.oid, p.pubname, "
 							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+							 "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
 							 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4322,6 +4329,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4341,6 +4349,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4390,8 +4400,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 9f907ed5ad..7a44cb2cf5 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -621,6 +621,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index ab6c830491..b364a95759 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2986,6 +2986,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 6a36c91083..954d1c31c2 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		/* Set the title before printTableInit() stores a pointer to it */
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2053,6 +2111,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6237,7 +6301,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6254,16 +6318,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6361,6 +6433,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6377,6 +6450,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6390,6 +6464,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6441,6 +6519,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6448,6 +6528,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6458,6 +6540,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index b4efb127dc..e066cc1b6b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3474,12 +3474,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1c314cd907..461581ca67 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4175,6 +4175,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4182,6 +4198,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3819bf5e25..0c05718881 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 660245ed0c..d40c01e347 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index f68a5b5986..61f98a8b2f 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a65e1c07c5..d74634087e 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2254,6 +2254,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

v20241008-0005-Documentation-for-sequence-synchronization.patchtext/x-patch; charset=US-ASCII; name=v20241008-0005-Documentation-for-sequence-synchronization.patchDownload
From 07de63c2597b418577187cea4f00d96eb84cd03c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20241008 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 964c819a02..aa873b4df6 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8113,16 +8113,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8156,7 +8159,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
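As a quick sanity check of the above (a sketch only; the subscription name
sub1 is hypothetical), the sequences tracked by a subscription and their
current state can be listed by joining pg_subscription_rel with pg_class,
where relkind = 'S' restricts the result to sequences and srsubstate uses
the same codes already documented for tables:

test_sub=# SELECT c.relnamespace::regnamespace AS schemaname,
test_sub-#        c.relname AS seqname, sr.srsubstate
test_sub-#   FROM pg_subscription_rel sr
test_sub-#        JOIN pg_subscription s ON s.oid = sr.srsubid
test_sub-#        JOIN pg_class c ON c.oid = sr.srrelid
test_sub-#  WHERE s.subname = 'sub1' AND c.relkind = 'S';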
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 9707d5238d..4c28ac6235 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5202,8 +5202,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5226,10 +5226,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
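As a concrete sketch of sizing these settings (the values below are purely
illustrative), both GUCs can be raised with ALTER SYSTEM; note that
max_logical_replication_workers only takes effect after a server restart,
whereas max_sync_workers_per_subscription can be picked up with a reload:

ALTER SYSTEM SET max_logical_replication_workers = 8;
ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
SELECT pg_reload_conf();   -- applies the per-subscription sync worker limit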
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 98a7ad0c27..9ab6bf39fc 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1567,6 +1567,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
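For example (sequence and subscription names are hypothetical), if the
publisher's s2 was created with INCREMENT BY 10 but the subscriber's copy
still uses the default increment, the warning can be resolved on the
subscriber as follows:

test_sub=# ALTER SEQUENCE s2 INCREMENT BY 10;
ALTER SEQUENCE
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION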
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
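A simple way to spot such drift (sketch only; it assumes matching sequence
names on both nodes and a subscription named sub1) is to compare last_value
from the pg_sequences view on each side and refresh when they diverge:

test_pub=# SELECT schemaname, sequencename, last_value FROM pg_sequences;
test_sub=# SELECT schemaname, sequencename, last_value FROM pg_sequences;
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;  -- if values differ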
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1873,16 +2068,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2195,8 +2392,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2210,7 +2407,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 331315f8d3..d9807dd3e0 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
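With the new worker type, whether a sequence synchronization worker is
currently running can be checked by filtering pg_stat_subscription on the
worker_type column described above (the worker is short-lived, so an empty
result is the usual case):

test_sub=# SELECT subname, pid, worker_type
test_sub-#   FROM pg_stat_subscription
test_sub-#  WHERE worker_type = 'sequence synchronization';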
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing table data and synchronize
+          sequences in the publications that are being subscribed to when
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify sequences and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 8a3096e62b..f30ab989f2 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 61d28e701f..65bd5dc927 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

v20241008-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 5b11557679d3ce50c5948bb212fc660acf790e16 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20241008 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 181 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 223 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 9efc9159f2..077903f059 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -457,13 +457,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..fdd579b639
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,181 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List	   *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e03e761392..c753f45704 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1320,7 +1234,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,77 +1475,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1717,7 +1560,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1735,7 +1578,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 925dff9cc4..6f0cf34eb1 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4775,7 +4775,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d74634087e..9d0b5d1446 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2795,7 +2795,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

#170Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#169)
Re: Logical Replication of sequences

On Tue, Oct 8, 2024 at 2:46 AM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 4 Oct 2024 at 15:39, shveta malik <shveta.malik@gmail.com> wrote:

On Sun, Sep 29, 2024 at 12:34 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 26 Sept 2024 at 11:07, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 20, 2024 at 9:36 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 11:54, vignesh C <vignesh21@gmail.com> wrote:

On Wed, 21 Aug 2024 at 08:33, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are my only review comments for the latest patch set.

Thanks, these issues have been addressed in the updated version.
Additionally, I have fixed the pgindent problems that were reported
and included another advantage of this design in the file header of
the sequencesync file.

The patch did not apply on top of HEAD, so here is a rebased version of
the patches.
I have also removed an invalidation that was not required for
sequences and fixed a typo.

Thank you for the patches. I would like to understand srsublsn and
page_lsn more. Please see the scenario below:

I have a sequence:
CREATE SEQUENCE myseq0 INCREMENT 5 START 100;

After refresh on sub:
postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380 -->pub's page_lsn

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 105 | 31 | t -->(I am assuming 0/152D830 is the
local page_lsn corresponding to last_value=105)

Now I assume that page_lsn shall change *only* after calling nextval 31
more times. But I observe strange behaviour

After running nextval on sub for 7 times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 140 | 24 | t -->correct

After running nextval on sub for 15 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D830 | 215 | 9 | t -->correct
(1 row)

Now after running it 6 more times:
postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152D990 | 245 | 28 | t --> how??

last_value increased in the expected way (6*5), but page_lsn changed
and log_cnt changed before we could complete the remaining runs as
well. Not sure why??

This can occur if a checkpoint happened at that time. The regression
test also has specific handling for this, as noted in a comment within
the sequence.sql test file:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time
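
Roughly, that behaviour comes from the pre-fetch logic in nextval_internal()
(a simplified sketch; see sequence.c for the exact code):

    if (log < fetch || !seq->is_called)
    {
        /* forced log to satisfy local demand for values */
        fetch = log = fetch + SEQ_LOG_VALS;
        logit = true;
    }
    else
    {
        XLogRecPtr  redoptr = GetRedoRecPtr();

        if (PageGetLSN(page) <= redoptr)
        {
            /* last update of seq was before checkpoint */
            fetch = log = fetch + SEQ_LOG_VALS;
            logit = true;
        }
    }

If the sequence page was last WAL-logged at or before the latest checkpoint's
redo pointer, the next nextval() forces a new WAL record, which rewrites the
page (new page_lsn) and grants SEQ_LOG_VALS (32) more fetches, so log_cnt
jumps.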

Okay. I see. I tried by executing 'checkpoint' and can see the same behaviour.

Now if I do refresh again:

postgres=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
ALTER SUBSCRIPTION

postgres=# select * from pg_subscription_rel;
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+-----------
16385 | 16384 | r | 0/152F380-->pub's page_lsn, same as old one.

postgres=# select * from pg_sequence_state('myseq0');
page_lsn | last_value | log_cnt | is_called
-----------+------------+---------+-----------
0/152DDB8 | 105 | 31 | t
(1 row)

Now, what is this page_lsn = 0/152DDB8? Should it be the one
corresponding to last_value=105 and thus shouldn't it match the
previous value of 0/152D830?

After executing REFRESH PUBLICATION SEQUENCES, the publication value
will be resynchronized, and a new LSN will be generated and updated
for the publisher sequence (using the old value). Therefore, this is
not a concern.

Okay.

Few comments:

1)
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, char *subname, List *publications)

--fetch_sequence_list() is not using the argument subname anywhere.

2)

+ if (resync_all_sequences)
+ {
+ ereport(DEBUG1,
+ errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name));
+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+    InvalidXLogRecPtr);
+ }

--Shall we emit the DEBUG1 message after we are done with
UpdateSubscriptionRelState? Otherwise we may end up emitting this log
message even if the update fails for some reason.

3)
fetch_remote_sequence_data():

Should we have a macro REMOTE_SEQ_COL_COUNT 10 and use it instead of
the literal 10? Also, instead of passing 1, 2, 3, etc. to slot_getattr,
we can use ++col and at the end have:
Assert(col == REMOTE_SEQ_COL_COUNT);
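
Something along these lines (a minimal sketch of the suggested pattern; the
column order and variable names are only illustrative, not the actual patch
code):

    #define REMOTE_SEQ_COL_COUNT 10

        int         col = 0;
        bool        isnull;

        last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
        Assert(!isnull);
        is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
        Assert(!isnull);
        /* ... the remaining columns are fetched the same way, using ++col ... */

        /* After the last column has been read, the count must add up. */
        Assert(col == REMOTE_SEQ_COL_COUNT);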

Thanks for the comments, these are addressed in the attached patch.

Here are comments on the 0001 and 0002 patches:

0001 patch:

read_seq_tuple() reads a buffer and acquires a lock on it, and the
buffer is returned to the caller while still locked. So I think it's
possible for the caller to get the page LSN even without these changes.
Since pg_sequence_state() is the sole caller that requests lsn_ret to
be set, I think the changes to read_seq_tuple() are not really
necessary.
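
That is, the caller can read the page LSN from the still-locked buffer
itself, for example:

    seq = read_seq_tuple(seqrel, &buf, &seqtuple);
    lsn = PageGetLSN(BufferGetPage(buf));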

0002 patch:
+        Assert(all_tables && *all_tables == false);
+        Assert(all_sequences && *all_sequences == false);

I think it's better to set both *all_tables and *all_sequences to false
at the beginning of the function to ensure this function works as
expected regardless of their initial values.
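
A minimal sketch of that suggestion, assuming the function in question is
preprocess_pub_all_objtype_list() from the 0002 patch (the loop body is
elided):

    static void
    preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
                                    bool *all_sequences, core_yyscan_t yyscanner)
    {
        ListCell   *lc;

        Assert(all_tables && all_sequences);

        /* Initialize the outputs so callers need not pre-set them. */
        *all_tables = false;
        *all_sequences = false;

        foreach(lc, all_objects_list)
        {
            /* ... set *all_tables / *all_sequences per object type ... */
        }
    }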

---
        appendPQExpBufferStr(query,
                             "SELECT p.tableoid, p.oid, p.pubname, "
                             "p.pubowner, "
-                            "p.puballtables, p.pubinsert,
p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+                            "p.puballtables, false as
p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete,
p.pubtruncate, p.pubviaroot "
                             "FROM pg_publication p");
    else if (fout->remoteVersion >= 110000)
        appendPQExpBufferStr(query,
                             "SELECT p.tableoid, p.oid, p.pubname, "
                             "p.pubowner, "
-                            "p.puballtables, p.pubinsert,
p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+                            "p.puballtables, false as
p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete,
p.pubtruncate, false AS pubviaroot "
                             "FROM pg_publication p");
    else
        appendPQExpBufferStr(query,
                             "SELECT p.tableoid, p.oid, p.pubname, "
                             "p.pubowner, "
-                            "p.puballtables, p.pubinsert,
p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+                            "p.puballtables, false as
p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS
pubtruncate, false AS pubviaroot "
                             "FROM pg_publication p");

The column name should be puballsequences, not p.puballsequences.
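
For example, the first of those branches would then read (only the alias
changes):

    appendPQExpBufferStr(query,
                         "SELECT p.tableoid, p.oid, p.pubname, "
                         "p.pubowner, "
                         "p.puballtables, false AS puballsequences, "
                         "p.pubinsert, p.pubupdate, p.pubdelete, "
                         "p.pubtruncate, p.pubviaroot "
                         "FROM pg_publication p");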

---
IIUC the changes to describeOneTableDetails() include two kinds of
changes: refactoring to use printTable() instead of printQuery(), and
adding the publications that include the sequence. Is the first
refactoring necessary for the second change? If not, should it be done
in a separate patch?
Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#171vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#170)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 24 Oct 2024 at 04:24, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Here are comments on the 0001 and 0002 patches:

0001 patch:

read_seq_tuple() reads a buffer and acquires a lock on it, and the
buffer is returned to the caller while still locked. So I think it's
possible for the caller to get the page LSN even without these changes.
Since pg_sequence_state() is the sole caller that requests lsn_ret to
be set, I think the changes to read_seq_tuple() are not really
necessary.

Modified

0002 patch:
+        Assert(all_tables && *all_tables == false);
+        Assert(all_sequences && *all_sequences == false);

I think it's better to set both *all_tables and *all_sequences to false
at the beginning of the function to ensure this function works as
expected regardless of their initial values.

Modified

---
appendPQExpBufferStr(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
-                            "p.puballtables, p.pubinsert,
p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+                            "p.puballtables, false as
p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete,
p.pubtruncate, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBufferStr(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
-                            "p.puballtables, p.pubinsert,
p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+                            "p.puballtables, false as
p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete,
p.pubtruncate, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBufferStr(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
-                            "p.puballtables, p.pubinsert,
p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+                            "p.puballtables, false as
p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS
pubtruncate, false AS pubviaroot "
"FROM pg_publication p");

The column name should be puballsequences, not p.puballsequences.

Modified

---
IIUC the changes to describeOneTableDetails() include two kinds of
changes: refactoring to use printTable() instead of printQuery(), and
adding the publications that include the sequence. Is the first
refactoring necessary for the second change? If not, should it be done
in a separate patch?

We are adding the publications that contain the sequence as footers to
the sequence description, and each sequence can have a different set of
publication names. Since the number of publications is not known in
advance, we first determine the total number of publications and then
allocate the necessary size for the footers. We append 'Owned by' or
'Sequence for identity column' to footers[0], followed by the
publication entries, and these are also subject to server version
checks. In this case, using printTable instead of printQuery is
preferable, as it simplifies the code.
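
Roughly, footer handling with the printTable machinery looks like this
(an illustrative sketch with invented variable names, not the patch's
actual code):

    PGresult   *pubres;

    /* Query, via PSQLexec, for the publications that contain this sequence. */
    pubres = PSQLexec(pubquery.data);
    if (pubres)
    {
        int         npubs = PQntuples(pubres);

        if (npubs > 0)
            printTableAddFooter(&cont, _("Publications:"));

        for (int i = 0; i < npubs; i++)
        {
            printfPQExpBuffer(&buf, "    \"%s\"", PQgetvalue(pubres, i, 0));
            printTableAddFooter(&cont, buf.data);
        }
        PQclear(pubres);
    }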

The attached patch has the changes for the fixes.

Regards,
Vignesh

Attachments:

v20241031-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From c45f4540ced1e54990c29772a8137681abd5b1b4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20241031 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of the current sequence state, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 05f630c6a6..5132283157 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19632,6 +19632,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..8b6c34a2c1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+	lsn = PageGetLSN(page);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current on-disk value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1ec0d6f6b5..940be4ac05 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3372,6 +3372,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

v20241031-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From df8efb02d53f452ffb503464de77169665a609ff Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 8 Oct 2024 10:38:27 +0530
Subject: [PATCH v20241031 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  18 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  45 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 192 ++++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 492 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 711 insertions(+), 305 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index d2cac06fd7..f894af7133 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -159,6 +164,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -241,10 +256,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -260,8 +275,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -403,6 +419,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 17a6093d06..f010a40c12 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -971,6 +972,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -993,6 +1030,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index d6ffef374e..9883390190 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -747,11 +747,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -783,6 +785,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1904,12 +1908,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dd458182f0..c3e4fc3697 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -198,6 +198,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 static PartitionStrategy parsePartitionStrategy(char *strategy);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -254,6 +258,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -437,7 +442,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -573,6 +578,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10522,7 +10528,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10542,13 +10553,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10660,6 +10671,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19435,6 +19468,47 @@ parsePartitionStrategy(char *strategy)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences, and check
+ * that no object type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index d8c6330732..d26222c3da 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4277,6 +4277,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4291,24 +4292,27 @@ getPublications(Archive *fout)
 	query = createPQExpBuffer();
 
 	/* Get the publications. */
+	appendPQExpBufferStr(query,
+						 "SELECT p.tableoid, p.oid, p.pubname,\n"
+						 " p.pubowner, p.puballtables, p.pubinsert,\n"
+						 " p.pubupdate, p.pubdelete,\n");
+
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBufferStr(query, " p.pubtruncate,\n");
+	else
+		appendPQExpBufferStr(query, " false AS pubtruncate,\n");
+
 	if (fout->remoteVersion >= 130000)
-		appendPQExpBufferStr(query,
-							 "SELECT p.tableoid, p.oid, p.pubname, "
-							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
-							 "FROM pg_publication p");
-	else if (fout->remoteVersion >= 110000)
-		appendPQExpBufferStr(query,
-							 "SELECT p.tableoid, p.oid, p.pubname, "
-							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
-							 "FROM pg_publication p");
+		appendPQExpBufferStr(query, " p.pubviaroot,\n");
 	else
-		appendPQExpBufferStr(query,
-							 "SELECT p.tableoid, p.oid, p.pubname, "
-							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
-							 "FROM pg_publication p");
+		appendPQExpBufferStr(query, " false AS pubviaroot,\n");
+
+	if (fout->remoteVersion >= 180000)
+		appendPQExpBufferStr(query, " p.puballsequences\n");
+	else
+		appendPQExpBufferStr(query, " false AS puballsequences\n");
+
+	appendPQExpBufferStr(query, "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 
@@ -4322,6 +4326,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4341,6 +4346,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4390,8 +4397,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 9f907ed5ad..7a44cb2cf5 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -621,6 +621,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index ac60829d68..3ce53bafbf 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2986,6 +2986,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 363a66e718..5868fa9c38 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1715,28 +1715,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1745,22 +1736,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1770,6 +1754,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1801,32 +1838,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2049,6 +2107,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6233,7 +6297,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6250,16 +6314,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6357,6 +6429,7 @@ describePublications(const char *pattern)
 	PGresult   *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6373,6 +6446,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6386,6 +6460,10 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6437,6 +6515,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6444,6 +6524,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6454,6 +6536,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 1be0056af7..42931b566a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3476,12 +3476,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index d9518a58b0..cb52303248 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -102,6 +108,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublicationActions pubactions;
 } Publication;
@@ -136,6 +143,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index b40b661ec8..568098948f 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4184,6 +4184,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication object types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4191,6 +4207,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 3819bf5e25..0c05718881 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                              List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
-------+-------+------------+---------+---------+---------+-----------+----------
+                                      List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index d2ed1efc3b..e02bdb9aa7 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR:  conflicting or redundant options
 LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
                                                              ^
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                              List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                      List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (2 rows)
 
 --- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                             Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                     Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -144,10 +144,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -156,10 +156,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test"
 
@@ -170,10 +170,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                               Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -195,10 +195,10 @@ Publications:
     "testpub_foralltables"
 
 \dRp+ testpub_foralltables
-                              Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f
+                                      Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -210,24 +210,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                    Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                    Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                  Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                             Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -243,10 +315,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_parted"
 
@@ -261,10 +333,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                               Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t
+                                       Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t
 Tables:
     "public.testpub_parted"
 
@@ -293,10 +365,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -309,10 +381,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -328,10 +400,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -339,10 +411,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                    Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                            Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -375,10 +447,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -388,10 +460,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f
+                                        Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -506,10 +578,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                    Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                            Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -730,10 +802,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                               Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f
+                                       Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -917,10 +989,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                              Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                      Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1125,10 +1197,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                 Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                         Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1166,10 +1238,10 @@ Publications:
     "testpub_fortbl"
 
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1247,10 +1319,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f
+                                        Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1260,20 +1332,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                           List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f
+                                                   List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                             List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f
+                                                     List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f
 (1 row)
 
 -- adding schemas and tables
@@ -1289,19 +1361,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1315,44 +1387,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                               Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                               Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                               Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                        Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1386,10 +1458,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1397,20 +1469,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                               Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,10 +1490,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1430,10 +1502,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1442,10 +1514,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1453,10 +1525,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1464,10 +1536,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1475,29 +1547,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1506,10 +1578,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1518,10 +1590,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                               Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1600,18 +1672,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                               Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                       Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,20 +1693,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                           Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                           Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f
+                                   Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 12aea71c0f..05fac5e4de 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -117,6 +117,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 171a7dd5d2..267715e7c4 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2255,6 +2255,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1
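
For quick reference, a minimal publisher-side sketch of the syntax the tests
above exercise (object names here are illustrative, not taken from the patch):

    -- publish every sequence in the database
    CREATE SEQUENCE seq_demo;
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- the new puballsequences flag should now be true for this publication
    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname = 'pub_seq';

    -- \dRp+ additionally shows the new "All sequences" column
    \dRp+ pub_seq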

Attachment: v20241031-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 343bcc4949c39b6627b1f79e66064dbd022581fb Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20241031 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 964c819a02..aa873b4df6 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8113,16 +8113,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8156,7 +8159,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d54f904956..d712b4228c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5229,8 +5229,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5253,10 +5253,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 98a7ad0c27..9ab6bf39fc 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1567,6 +1567,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   On the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side again.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1873,16 +2068,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2195,8 +2392,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2210,7 +2407,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 331315f8d3..d9807dd3e0 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 61d28e701f..65bd5dc927 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1
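
To complement the documentation examples above, a minimal subscriber-side
sketch of the documented workflow (the connection string and object names are
illustrative, and the pg_sequences query is just one way to compare values
between the two nodes):

    -- on the subscriber, assuming a FOR ALL SEQUENCES publication pub_seq
    -- exists on the publisher
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=localhost dbname=test_pub'
        PUBLICATION pub_seq;

    -- later, after the publisher's sequences have advanced,
    -- re-synchronize all subscribed sequences
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;

    -- run on both nodes and compare
    SELECT schemaname, sequencename, last_value FROM pg_sequences;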

Attachment: v20241031-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 0ade8beac4ee62bc329271da514219a25ddd256c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20241031 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 181 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 223 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 89bf5ec933..394b7c5efe 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..fdd579b639
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,181 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List	   *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 118503fcb7..329787d539 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1482,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1585,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 925dff9cc4..6f0cf34eb1 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4775,7 +4775,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 267715e7c4..3efe5e239e 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2797,7 +2797,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

Attachment: v20241031-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 9d5e80ec6fa079f1cfedbe3175c0d07813d13910 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 8 Oct 2024 10:41:42 +0530
Subject: [PATCH v20241031 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
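
For illustration, the workflow above maps onto the following minimal SQL
sketch (the connection string, subscription name, and publication name are
hypothetical, and the publication is assumed to already publish the
sequences of interest):

    -- 1) Initial sync: publisher sequences are added to pg_subscription_rel
    --    in INIT state and a sequencesync worker synchronizes them.
    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub1;

    -- 2) Pick up sequences added to (or dropped from) the publication;
    --    only newly added sequences are synchronized.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- 3) New command: re-synchronize all sequences by resetting them to
    --    INIT state.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- Check per-sequence synchronization state ('i' = INIT, 'r' = READY).
    SELECT srrelid::regclass AS seqname, srsubstate
      FROM pg_subscription_rel
     WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');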
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 636 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  76 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 +++++
 26 files changed, 1436 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index f010a40c12..1168f87c99 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1265,3 +1265,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
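
For illustration, the function above can be queried directly on the publisher
to list the sequences published by a publication (the publication name is
hypothetical; the column alias mirrors the pg_publication_sequences view
added later in this patch):

    SELECT gps.relid::regclass AS sequencename
      FROM pg_get_publication_sequences('pub1') AS gps(relid);
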
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 394b7c5efe..a87472673b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * When getting tables: if all_states is true, get all tables; otherwise
+ * get only the tables that have not reached READY state.
+ * When getting sequences: if all_states is true, get all sequences;
+ * otherwise get only the sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +565,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +575,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,8 +590,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3456b821bc..88ec8735af 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8b6c34a2c1..90510776e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1043,8 +1045,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page LSN of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page LSN of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 03e97730e7..1fe09c1f30 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +794,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +833,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note: this is a common function for handling the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +894,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +918,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +947,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +972,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +989,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1007,32 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,13 +1573,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1508,7 +1629,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1898,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2131,11 +2256,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 54025c9f15..573c7eb26b 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -859,7 +859,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c3e4fc3697..10fa5eb28a 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10802,11 +10802,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..c7f1ff51d6 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..6988b62a1b
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,636 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections with the databases to retrieve
+ *    the sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence state from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. At the same time, an excessively
+ * high batch size would keep transactions open for extended periods, so the
+ * batch is capped at a moderate value.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Use the slot name instead of the subscription name as the
+	 * application_name, so that it differs from the leader apply worker and
+	 * synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence synchronization does not honor RLS policies.  That is not
+		 * a problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, emit a warning listing the sequences
+		 * whose parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last remaining sequence to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index fdd579b639..0bb47bfa74 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List	   *table_states_not_ready = NIL;
+List	   *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -100,7 +114,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -112,17 +138,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * Declared static so that the cached value can be reused until the
+	 * system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -135,16 +166,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -152,7 +186,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -177,5 +215,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 329787d539..f6119e5c42 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1241,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1559,7 +1559,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1567,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1581,17 +1581,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6f0cf34eb1..82cc671cb8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4626,8 +4652,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4706,6 +4732,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4725,14 +4755,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4777,6 +4810,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8a67f01200..405354e5ca 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 42931b566a..c86fdcb8b2 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 940be4ac05..7e589bb3ec 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12101,6 +12101,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 568098948f..63924b2523 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4252,7 +4252,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2b47013f11..72a96049c7 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..b4734d0368
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.34.1

#172vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#171)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 31 Oct 2024 at 14:57, vignesh C <vignesh21@gmail.com> wrote:

On Thu, 24 Oct 2024 at 04:24, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Here are comments on the 0001 and 0002 patches:

0001 patch:

read_seq_tuple() reads a buffer and acquires a lock on it, and the
buffer is returned to the caller while being locked. So I think it's
possible for the caller to get the page LSN even without changes.
Since pg_sequence_state() is the sole caller that requests lsn_ret to
be set, I think the changes to read_seq_tuple() are not strictly
necessary.

Modified

0002 patch:
+        Assert(all_tables && *all_tables == false);
+        Assert(all_sequences && *all_sequences == false);

I think it's better to set both *all_tables and *all_sequences to false
at the beginning of the function to ensure this function works as
expected regardless of their initial values.

Modified
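
As a small illustration of that suggestion, here is a minimal sketch of the
pattern (the function and variable names below are illustrative assumptions,
not taken from the patch):

#include <stdbool.h>

/*
 * Illustrative sketch only: clear the output flags at the top of the
 * function instead of asserting that the caller pre-initialized them.
 */
void
classify_publications(int npubs, const bool *pub_all_tables,
					  const bool *pub_all_sequences,
					  bool *all_tables, bool *all_sequences)
{
	/* Initialize output parameters regardless of caller-supplied values. */
	*all_tables = false;
	*all_sequences = false;

	for (int i = 0; i < npubs; i++)
	{
		if (pub_all_tables[i])
			*all_tables = true;
		if (pub_all_sequences[i])
			*all_sequences = true;
	}
}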

---
appendPQExpBufferStr(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
-                            "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+                            "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBufferStr(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
-                            "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+                            "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBufferStr(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
-                            "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+                            "p.puballtables, false as p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
"FROM pg_publication p");

The column name should be puballsequences, not p.puballsequences.

Modified

---
IIUC the changes of describeOneTableDetails() includes two kinds of
changes: refactoring to use printTable() instead of printQuery(), and
adding publications that includes the sequence. Is the first
refactoring necessary for the second change? If not, should it be done
in a separate patch?

We are adding the publication names as footers to the sequence
description. Since the number of publications is unknown in advance, we
first determine the total number of publications and then allocate the
necessary space for the footers. footers[0] is set to either 'Owned by'
or 'Sequence for identity column', followed by the publication entries.
Additionally, these are subject to version checks. In this case, using
printTable instead of printQuery is preferable, as it simplifies the
code.
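
For readers unfamiliar with psql's printing helpers, here is a minimal
sketch of the printTable-based approach (this is only an illustration,
not the patch itself; the column layout, footer text, and names such as
print_sequence_details are assumptions):

#include "postgres_fe.h"
#include "fe_utils/print.h"
#include "settings.h"			/* psql's pset */

/* Print a one-row sequence description with a variable number of footers. */
static void
print_sequence_details(const char *title, char *last_value,
					   char **pubnames, int npubs)
{
	printTableContent cont;
	printTableOpt myopt = pset.popt.topt;

	printTableInit(&cont, &myopt, title, 1, 1);
	printTableAddHeader(&cont, gettext_noop("Last value"), true, 'r');
	printTableAddCell(&cont, last_value, false, false);

	/*
	 * Unlike printQuery(), footers can be appended one at a time, which
	 * suits a publication list whose length is only known at run time.
	 */
	if (npubs > 0)
	{
		printTableAddFooter(&cont, _("Publications:"));
		for (int i = 0; i < npubs; i++)
		{
			char		footer[256];

			snprintf(footer, sizeof(footer), "    \"%s\"", pubnames[i]);
			printTableAddFooter(&cont, footer);
		}
	}

	printTable(&cont, pset.queryFout, false, pset.logfile);
	printTableCleanup(&cont);
}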

The attached patch has the changes for the fixes.

The patch needed to be rebased; here is the updated version.

Regards,
Vignesh

Attachments:

v20241118-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 90c483d8772e565f0b92f4f583c1dd501654f361 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20241118 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 59bb833f48..2fcc9972c7 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8110,16 +8110,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8153,7 +8156,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a84e60c09b..768f888068 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5233,8 +5233,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5257,10 +5257,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b7e340824c..24632918fd 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1569,6 +1569,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1875,16 +2070,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2197,8 +2394,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2212,7 +2409,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 840d7f8161..48ee7634fb 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify sequences and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 61d28e701f..65bd5dc927 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of the
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

v20241118-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 423e13e1dd764777c52869b6b0e7e4dd7612c060 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 18 Nov 2024 10:16:23 +0530
Subject: [PATCH v20241118 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  18 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 191 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 721 insertions(+), 319 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index f8e217d661..2aa40ba9d3 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -159,6 +164,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +276,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +295,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -423,6 +439,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 09e2dbdd10..c239ec604c 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0129db18c6..39793c1009 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -760,11 +760,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -798,6 +800,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1931,12 +1935,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 67eb96396a..40059e2930 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19489,6 +19522,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a8c141b689..e1cbaf1195 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4279,6 +4279,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4309,9 +4310,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4327,6 +4328,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4347,6 +4349,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4398,8 +4402,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index d65f558565..50d21a451c 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -625,6 +625,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index aa1564cd45..62a6edcbd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 5bfebad64d..deb52428a7 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1714,28 +1714,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1744,22 +1735,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1769,6 +1753,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == 'u')
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1800,32 +1837,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == 'u')
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2048,6 +2106,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6276,7 +6340,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6293,16 +6357,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6405,6 +6477,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6422,6 +6495,7 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6438,6 +6512,9 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6492,6 +6569,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6499,6 +6578,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6511,6 +6592,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index fad2277991..59737884fd 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3476,12 +3476,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 9a83a72d6b..48fad871f7 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -105,6 +111,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -140,6 +147,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9462493e..7637f67518 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 5de2d64d01..9d4ea3f4e4 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -738,10 +810,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -931,10 +1003,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1142,10 +1214,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1185,10 +1257,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1268,10 +1340,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1281,20 +1353,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1310,19 +1382,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1336,44 +1408,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1407,10 +1479,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1418,20 +1490,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1439,10 +1511,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1451,10 +1523,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1463,10 +1535,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1474,10 +1546,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1485,10 +1557,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1496,29 +1568,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1527,10 +1599,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1621,18 +1693,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1642,20 +1714,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1775,18 +1847,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1797,50 +1869,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 48e68bcca2..af06ed0900 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 08521d51a9..de63d5dd74 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2257,6 +2257,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20241118-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From a83e6a3c817ffab0eca8cc0d15a21649e82d2aa6 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20241118 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 73979f20ff..406c108845 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19632,6 +19632,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..8b6c34a2c1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+	lsn = PageGetLSN(page);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value stored in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index cbbe8acd38..1410fe15d8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.34.1

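As a quick illustration of the 0001 patch above, here is a minimal usage
sketch of pg_sequence_state(). The sequence name demo_seq is hypothetical,
and the page_lsn/log_cnt values shown in the comments are only indicative;
the caller needs USAGE or SELECT privilege on the sequence, per the
documentation hunk in the patch.

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');

-- Fetch the on-disk state, including the page LSN that the later
-- sequence-synchronization patch records in pg_subscription_rel.
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');

--  page_lsn  | last_value | log_cnt | is_called
-- -----------+------------+---------+-----------
--  0/14EF810 |          1 |      32 | t
-- (page_lsn varies per cluster; log_cnt is how many fetches remain before
--  a new WAL record must be written)
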
Attachment: v20241118-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 51ebf17a56a102ef7ee9ac95abb558633a847380 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20241118 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 181 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 223 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 89bf5ec933..394b7c5efe 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..fdd579b639
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,181 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List	   *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 118503fcb7..329787d539 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1482,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1585,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 925dff9cc4..6f0cf34eb1 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4775,7 +4775,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index de63d5dd74..4c8383a2d6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2799,7 +2799,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

Attachment: v20241118-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From f04656355a0a8dc7a5c766b8e5deab65fafde456 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 8 Oct 2024 10:41:42 +0530
Subject: [PATCH v20241118 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all sequences are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 636 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  76 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 +++++
 26 files changed, 1436 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index c239ec604c..1822104b40 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 394b7c5efe..a87472673b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +565,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +575,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,8 +590,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index da9a8fe99f..d10c3d8b56 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8b6c34a2c1..90510776e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1043,8 +1045,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization: it is
+ * recorded in the pg_subscription_rel system catalog and reflects the page
+ * LSN of the remote sequence at the moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 03e97730e7..1fe09c1f30 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +794,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +833,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or publication refresh.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note that this is a common function for handling different REFRESH
+ * commands, depending on the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +894,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	Assert(!resync_all_sequences ||
+		   (copy_data && !refresh_tables && refresh_sequences));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +918,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +947,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +972,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +989,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1007,32 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag is set to true only for the
+				 * REFRESH PUBLICATION SEQUENCES command, indicating that the
+				 * existing sequences need to be re-synchronized by resetting
+				 * their pg_subscription_rel state back to INIT.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip sequences; they do not have tablesync slots. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,13 +1573,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1508,7 +1629,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1898,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2131,11 +2256,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences that belong to the specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
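With the CreateSubscription() and fetch_sequence_list() changes above, published sequences are registered in pg_subscription_rel alongside tables when the subscription is created. A minimal end-to-end sketch, with pub1, sub1 and the connection string as placeholder names:

    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub1;

Tables are then synchronized by tablesync workers as before, while the registered sequences start in INIT state and are handled by the sequencesync worker added later in this patch.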
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 54025c9f15..573c7eb26b 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -859,7 +859,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 40059e2930..9b56c68aa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
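The new grammar production enables the command handled by ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES above. A usage sketch on the subscriber (sub1 is a placeholder name):

    -- Re-synchronize all subscribed sequences from the publisher's current values.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- The existing form still refreshes tables and sequences, but only picks up
    -- newly added or removed relations (honoring copy_data).
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;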
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..c7f1ff51d6 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker's failure time in the subscription's apply
+ * worker, so that the apply worker waits before re-launching it.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
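Since pg_stat_get_subscription() now reports the new worker type, a running sequencesync worker should be visible via the pg_stat_subscription view (assuming the view exposes this column as worker_type, as it does for the existing apply and tablesync types):

    SELECT subname, pid, worker_type
    FROM pg_stat_subscription;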
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..6988b62a1b
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,636 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/* Check if a sequencesync worker is already running. */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_catalog.pg_sequence_state(%u), pg_catalog.pg_sequence\n"
+					 "WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence data from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy the existing state of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber's
+ * sequence to the same value. The caller is responsible for locking the
+ * local relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch the OID and relkind of the remote sequence. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence at the time its state was fetched. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences so that they have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. At the same time, an excessively
+ * large batch would keep transactions (and the sequence locks) open for
+ * extended periods, so cap the number of sequences per transaction.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Use the slot name instead of the subscription name as the
+	 * application_name, so that synchronous replication can distinguish this
+	 * connection from the leader apply worker's connection.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure the sequence synchronization runs as the sequence owner,
+		 * unless the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence synchronization does not honor RLS policies.  That is not
+		 * a problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, emit a warning for the sequences whose
+		 * parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last remaining sequence to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
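Because the sequencesync worker only ever moves sequences from INIT to READY in pg_subscription_rel, synchronization progress can be checked with a query along these lines (srsubstate 'i' = INIT, 'r' = READY; sub1 is a placeholder subscription name):

    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
    FROM pg_subscription_rel sr
         JOIN pg_subscription s ON s.oid = sr.srsubid
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE s.subname = 'sub1' AND c.relkind = 'S' AND sr.srsubstate <> 'r';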
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index fdd579b639..0bb47bfa74 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List	   *table_states_not_ready = NIL;
+List	   *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -100,7 +114,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -112,17 +138,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared static, since the same value can be reused until the
+	 * relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -135,16 +166,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -152,7 +186,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -177,5 +215,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 329787d539..f6119e5c42 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1241,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1559,7 +1559,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1567,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1581,17 +1581,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6f0cf34eb1..82cc671cb8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4626,8 +4652,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4706,6 +4732,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4725,14 +4755,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4777,6 +4810,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8a67f01200..405354e5ca 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3239,7 +3239,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 59737884fd..a4fcb0aa24 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1410fe15d8..14f664145b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12103,6 +12103,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 7637f67518..733d2e15b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..b4734d0368
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# when the sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.34.1
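
A side note on the TAP test above: its synced_query only checks the
aggregate state. To list the sequences a subscription is tracking on the
subscriber together with their individual sync states, a query along these
lines can be used (a sketch only; it relies on sequences being recorded in
pg_subscription_rel alongside tables, as this patch set does):

SELECT c.relname, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relkind = 'S';

The test simply polls until every such entry has reached the 'r' (ready) state.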

#173vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#172)
5 attachment(s)
Re: Logical Replication of sequences

On Mon, 18 Nov 2024 at 10:19, vignesh C <vignesh21@gmail.com> wrote:

The patch needed to be rebased; here is the updated version.

Regards,
Vignesh

Attachments:

v20241208-0001-Introduce-pg_sequence_state-function-for-e.patch
From f3fe679b164f8ea4c47cce5f00c0fde77517428a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20241208 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which
allows retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 8b81106fa2..579e6ceedb 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19674,6 +19674,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record has to be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..8b6c34a2c1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+	lsn = PageGetLSN(page);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current value of the sequence, from its on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9575524007..f9aeb40f02 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0
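
For a quick manual check of the new function once this patch is applied, a
psql session along these lines should work (the sequence name is just an
example, mirroring the regression test):

CREATE SEQUENCE sequence_test;
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('sequence_test');

The returned row matches what selecting from the sequence relation itself
shows, with the page LSN added.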

v20241208-0005-Documentation-for-sequence-synchronization.patch
From 2206cdfd74d6906c3101d1e975e4abcf6ec40abc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20241208 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 353 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index bf3cee08a9..532c573987 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8119,16 +8119,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8162,7 +8165,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e0c8325a39..d853470490 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5241,8 +5241,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5265,10 +5265,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 8290cd1a08..00886b4a3c 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize any newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new sequence synchronization worker will be started to synchronize the
+   sequences after executing any of the above subscriber commands, and will
+   exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker will be limited by
+   the <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may become out of sync due to further
+    updates on the publisher.
+   </para>
+   <para>
+    To verify this, compare the sequence values between the publisher and
+    subscriber and execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+    if required.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1876,16 +2071,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2189,8 +2386,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2204,7 +2401,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 840d7f8161..48ee7634fb 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a586156614..de82964f6c 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
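
As a usage sketch tying the documentation above together (the object names
follow the examples in the patch and are otherwise illustrative), the
pg_publication_sequences view and pg_sequence_state() can be combined to
spot stale sequences before deciding whether a re-synchronization is needed:

-- On the publisher: list the sequences included in a publication
SELECT schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'pub1';

-- On both nodes: compare the current state of a given sequence
SELECT last_value, log_cnt, is_called FROM pg_sequence_state('s1');

-- On the subscriber: re-synchronize all subscribed sequences if they drifted
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;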

v20241208-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch
From 0fe77fcadbe36c9e44b37fea0689e1dfb1453e57 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20241208 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 181 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 223 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 89bf5ec933..394b7c5efe 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..fdd579b639
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,181 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List	   *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 7c8a0e9cfe..64311a420d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1482,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1585,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 46d3ad566f..376b70cfaa 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4775,7 +4775,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e62ad6b642..c03d8269a6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2801,7 +2801,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20241208-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 91a48c6dfe019426a60f38792a4aaa88ae557817 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Sun, 8 Dec 2024 13:37:31 +0000
Subject: [PATCH v20241208 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

The psql \d and \dRp commands are also updated: \d on a sequence now lists
the publications that include it, and \dRp shows whether a publication
publishes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
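A minimal sketch of the user-visible behaviour this patch adds (based on the
grammar, catalog, and pg_dump changes below; the publication names are
illustrative, and creating these publications requires superuser per the
publicationcmds.c change):

    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
    CREATE PUBLICATION both_pub FOR ALL TABLES, SEQUENCES;

    -- the new puballsequences flag records the choice; psql's \dRp
    -- shows it as the "All sequences" column
    SELECT pubname, puballtables, puballsequences
      FROM pg_catalog.pg_publication
     WHERE pubname IN ('seq_pub', 'both_pub');
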
 doc/src/sgml/ref/create_publication.sgml  |  45 +-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  18 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 191 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 721 insertions(+), 319 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 5e25536554..0cfeca5700 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -159,6 +164,16 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +276,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal>,
+   <literal>FOR TABLES IN SCHEMA</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +295,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR ALL TABLES</command>,
+   <command>FOR TABLES IN SCHEMA</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -431,6 +447,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables and
+   synchronizes all sequences:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9bbb60463f..9fb98adb9d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 5050057a7e..74217fad11 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -786,11 +786,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -824,6 +826,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1957,12 +1961,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 67eb96396a..40059e2930 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19489,6 +19522,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check whether the same object type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ec0cdf4ed7..7940fb2652 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4285,6 +4285,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4315,9 +4316,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4333,6 +4334,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4353,6 +4355,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4404,8 +4408,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 2e55a0e3bb..a1ec7c1101 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -625,6 +625,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index aa1564cd45..62a6edcbd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 2657abdc72..77d9f58653 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1737,28 +1737,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1767,22 +1758,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1792,6 +1776,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1823,32 +1860,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2071,6 +2129,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6318,7 +6382,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6335,16 +6399,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6447,6 +6519,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6464,6 +6537,7 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6480,6 +6554,9 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6534,6 +6611,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6541,6 +6620,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6553,6 +6634,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index bbd08770c3..d7bef42d74 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3476,12 +3476,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL SEQUENCES", "FOR TABLES IN SCHEMA", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "ALL TABLES", "ALL SEQUENCES", "TABLES IN SCHEMA");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("SEQUENCES", "TABLES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "SEQUENCES|TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index e2d894a2ff..12613d22e2 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -112,6 +118,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -147,6 +154,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9462493e..7637f67518 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index b44ab007de..6b9152f29f 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1801,18 +1873,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1823,50 +1895,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c4c21a95d0..74d12ca2d1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ce33e55bf1..e62ad6b642 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2259,6 +2259,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
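
For reviewers who want to try the publisher-side syntax exercised by the
publication.sql additions above, a minimal psql session might look like this
sketch (object names are illustrative; the commands simply mirror the
regression tests):

    CREATE SEQUENCE s1;
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- puballsequences should now be true for pub_seq
    SELECT pubname, puballtables, puballsequences
    FROM pg_publication WHERE pubname = 'pub_seq';

    -- \dRp+ gains an "All sequences" column, and describing a sequence lists
    -- the publications it belongs to
    \dRp+ pub_seq
    \d+ s1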

v20241208-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 11d0cfefdb273664400d857997fb2cf0b77294ab Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 8 Oct 2024 10:41:42 +0530
Subject: [PATCH v20241208 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of the
       sequences that are in INIT state.
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all sequences are done; synchronized sequences are
       committed in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences (a usage
      sketch follows below).
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
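
For reference, a subscriber-side session exercising the above might look like
the following sketch (the subscription name, publication name, and connection
string are illustrative only):

    -- initial sync: published sequences are added to pg_subscription_rel in
    -- INIT state and a sequencesync worker is launched
    CREATE SUBSCRIPTION sub1
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub_seq;

    -- pick up sequences newly added to the publication since the last refresh
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- re-synchronize all sequences (their state is reset to INIT first)
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- observe the per-sequence synchronization state on the subscriber
    SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;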
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 636 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  76 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/034_sequences.pl      | 186 +++++
 26 files changed, 1436 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/034_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9fb98adb9d..eaea9a4577 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 394b7c5efe..a87472673b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states: if true, return all requested relations regardless of state;
+ * otherwise return only tables that have not yet reached READY state and
+ * only sequences that are still in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +565,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +575,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,8 +590,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index da9a8fe99f..d10c3d8b56 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8b6c34a2c1..90510776e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set a
+ * sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1043,8 +1045,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg setval procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg setval procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record, in the pg_subscription_rel system catalog, the page LSN of the
+ * remote sequence at the moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 03e97730e7..1fe09c1f30 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +794,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +833,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Refresh the publication objects (tables and/or sequences) associated with
+ * the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.
+ *
+ * Note that this is a common function handling the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +894,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +918,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +947,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +972,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +989,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1007,32 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,13 +1573,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1508,7 +1629,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1898,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2131,11 +2256,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index cfdf2eedf4..c9bf4b3b8b 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -882,7 +882,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 40059e2930..9b56c68aa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..c7f1ff51d6 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..6988b62a1b
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,636 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequencesync worker already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence list from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence when its state was fetched. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs the overhead of starting
+ * and committing a transaction repeatedly. On the other hand, an excessively
+ * large batch would keep transactions (and the locks they hold) open for
+ * too long.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, report a warning for any sequences whose
+		 * parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable, so there is little point
+ * in retrying them.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
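
For reference, the remote query built in fetch_remote_sequence_data() above
amounts to the following when run by hand on the publisher (a sketch:
'regress_s1' from the TAP test stands in for the OID placeholders, and
pg_sequence_state() is the publisher-side helper this patch series relies on):

    SELECT last_value, log_cnt, is_called, page_lsn,
           seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
    FROM pg_sequence_state('regress_s1'::regclass), pg_sequence
    WHERE seqrelid = 'regress_s1'::regclass;

The first three columns feed SetSequence() on the subscriber, page_lsn becomes
the LSN recorded in pg_subscription_rel, and the remaining columns are only
compared against the local pg_sequence row to detect parameter mismatches.
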
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index fdd579b639..0bb47bfa74 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List	   *table_states_not_ready = NIL;
+List	   *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -100,7 +114,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -112,17 +138,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared as static, since the same value can be used until the
+	 * system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -135,16 +166,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -152,7 +186,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -177,5 +215,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
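
Since the same pg_subscription_rel catalog now tracks both tables and
sequences, a quick way to see on the subscriber how FetchRelationStates()
partitions it is to join against pg_class (a sketch; relkind 'S' rows go to
sequence_states_not_ready, everything else to table_states_not_ready, and
srsubstate 'i'/'r' are the INIT/READY states referred to above):

    SELECT sr.srrelid::regclass AS relname, c.relkind, sr.srsubstate
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    ORDER BY c.relkind, relname;
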
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 64311a420d..8c2ce6af6b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1241,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1559,7 +1559,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1567,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1581,17 +1581,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 376b70cfaa..cc6bdd4f4b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4626,8 +4652,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4706,6 +4732,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4725,14 +4755,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4777,6 +4810,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad2..7260ab1763 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3258,7 +3258,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
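
With this wording change the existing GUC also caps sequencesync workers (see
the nsyncworkers check in ProcessSyncingSequencesForApply() above). It remains
PGC_SIGHUP, so the limit can be adjusted without a restart, for example
(a sketch):

    ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
    SELECT pg_reload_conf();
    SHOW max_sync_workers_per_subscription;
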
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index d7bef42d74..4014899224 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f9aeb40f02..4cc5d629f9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12108,6 +12108,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 7637f67518..733d2e15b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
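
The renamed and newly added enum members correspond to the two user-visible
refresh variants exercised by the TAP test later in this patch. As a sketch,
using the subscription name from that test:

    -- picks up newly published sequences only; previously synchronized
    -- sequences keep their current values
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;

    -- additionally re-synchronizes all published sequences, including
    -- ones that were already in READY state
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;
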
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index c591cd7d61..870b4175e8 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -40,6 +40,7 @@ tests += {
       't/031_column_list.pl',
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
+      't/034_sequences.pl',
       't/100_bugs.pl',
     ],
   },
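
The rules.out change above also shows the definition of the new
pg_publication_sequences view. On the publisher, it gives a convenient way to
check which sequences a publication carries, for example against the
FOR ALL SEQUENCES publication created in the test below (a sketch):

    SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'regress_seq_pub';
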
diff --git a/src/test/subscription/t/034_sequences.pl b/src/test/subscription/t/034_sequences.pl
new file mode 100644
index 0000000000..b4734d0368
--- /dev/null
+++ b/src/test/subscription/t/034_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync newly added
+# sequences on the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync newly
+# added sequences on the publisher, and changes to existing sequences
+# should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should emit a warning
+# when the sequence definitions do not match between publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

#174vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#173)
5 attachment(s)
Re: Logical Replication of sequences

On Sun, 8 Dec 2024 at 19:57, vignesh C <vignesh21@gmail.com> wrote:
> On Mon, 18 Nov 2024 at 10:19, vignesh C <vignesh21@gmail.com> wrote:
> > The patch needed to be rebased; here is the updated version.

The patch needed to be rebased; here is the updated version.

Regards,
Vignesh

Attachments:

v20241211-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 354f442cd1d7b85068dcd739f566a60a7750fb88 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20241211 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 180 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++----------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 222 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 89bf5ec933..394b7c5efe 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..9a39b0d7c0
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,180 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+List	   *table_states_not_ready = NIL;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 7c8a0e9cfe..64311a420d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,13 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
-
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +134,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +228,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +236,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +294,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +332,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +359,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +375,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +513,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +605,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1482,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1585,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 46d3ad566f..376b70cfaa 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4775,7 +4775,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e62ad6b642..c03d8269a6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2801,7 +2801,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0
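A minimal caller-side sketch of the FetchRelationStates() contract introduced
in syncutils.c above (illustrative only, assuming the refactored API; modeled
on the AllTablesyncsReady() hunk in tablesync.c):

	bool		started_tx = false;
	bool		has_subtables;

	/* Rebuilds the cached lists if a syscache invalidation marked them stale. */
	has_subtables = FetchRelationStates(&started_tx);

	/* table_states_not_ready can be inspected here by the caller. */

	/*
	 * FetchRelationStates() starts a transaction only if the caller was not
	 * already in one; in that case ending it is the caller's responsibility.
	 */
	if (started_tx)
	{
		CommitTransactionCommand();
		pgstat_report_stat(true);
	}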

Attachment: v20241211-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 9de6397924b5957164d62221097c943f14b7ae8b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20241211 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2c35252dc0..855383efcf 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19674,6 +19674,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..8b6c34a2c1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {false, false, false, false};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+	lsn = PageGetLSN(page);
+
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last_value from the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record has to be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9575524007..f9aeb40f02 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0
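A quick usage sketch of the function added by this patch (illustrative only;
the sequence name is made up, and the returned values depend on how far the
sequence has advanced):

	CREATE SEQUENCE demo_seq;
	SELECT nextval('demo_seq');

	-- Returns the on-disk state of the sequence, including the page LSN.
	SELECT page_lsn, last_value, log_cnt, is_called
	  FROM pg_sequence_state('demo_seq');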

Attachment: v20241211-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 99d89b1464c30987a1e43e49c849881d77d355f5 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Sun, 8 Dec 2024 13:37:31 +0000
Subject: [PATCH v20241211 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp are enhanced to show the
publications a given sequence belongs to and whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  65 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  18 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 191 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 731 insertions(+), 329 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 5e25536554..412c947d8b 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -118,16 +123,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +154,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +276,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +295,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -431,6 +447,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9bbb60463f..9fb98adb9d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 5050057a7e..74217fad11 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -786,11 +786,13 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a %s publication",
+						stmt->for_all_tables ? "FOR ALL TABLES" :
+						"FOR ALL SEQUENCES")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -824,6 +826,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1957,12 +1961,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a %s publication must be a superuser.",
+							 form->puballtables ? "FOR ALL TABLES" :
+							 "FOR ALL SEQUENCES")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 67eb96396a..40059e2930 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19489,6 +19522,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ec0cdf4ed7..7940fb2652 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4285,6 +4285,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4315,9 +4316,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4333,6 +4334,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4353,6 +4355,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4404,8 +4408,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 2e55a0e3bb..a1ec7c1101 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -625,6 +625,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index aa1564cd45..62a6edcbd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 2657abdc72..77d9f58653 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1737,28 +1737,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1767,22 +1758,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1792,6 +1776,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1823,32 +1860,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2071,6 +2129,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6318,7 +6382,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6335,16 +6399,24 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
+
 	if (pset.sversion >= 110000)
 		appendPQExpBuffer(&buf,
 						  ",\n  pubtruncate AS \"%s\"",
@@ -6447,6 +6519,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6464,6 +6537,7 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
@@ -6480,6 +6554,9 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6534,6 +6611,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6541,6 +6620,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6553,6 +6634,8 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);	/* all sequences */
 		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index bbd08770c3..5884b7b2ab 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3476,12 +3476,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index e2d894a2ff..12613d22e2 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -112,6 +118,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -147,6 +154,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9462493e..7637f67518 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index c48f11f293..b5c2d44660 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1801,18 +1873,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1823,50 +1895,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c4c21a95d0..74d12ca2d1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ce33e55bf1..e62ad6b642 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2259,6 +2259,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
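
To illustrate the grammar exercised by the regression tests above, here is a
minimal SQL sketch (the publication names are invented for the example; the
puballsequences flag and the "All sequences" column come from the patches
above):

    -- Publish every sequence in the database.
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- Publish all tables and all sequences in a single publication.
    CREATE PUBLICATION all_pub FOR ALL SEQUENCES, TABLES;

    -- The new flag is visible in pg_publication, and \dRp+ now shows
    -- an "All sequences" column.
    SELECT pubname, puballtables, puballsequences
      FROM pg_publication
     WHERE pubname IN ('seq_pub', 'all_pub');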

Attachment: v20241211-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 3bce81e6c1e166f05d4bc8a68327bb44d00bb42c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20241211 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 222 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 352 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index bf3cee08a9..532c573987 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8119,16 +8119,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8162,7 +8165,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e0c8325a39..d853470490 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5241,8 +5241,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5265,10 +5265,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 8290cd1a08..0b05d99d81 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,200 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1876,16 +2070,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2189,8 +2385,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2204,7 +2400,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 840d7f8161..48ee7634fb 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a586156614..de82964f6c 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
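
The "Refreshing Stale Sequences" section above suggests comparing sequence
values between the publisher and subscriber but does not show a query. One
possible way (just a sketch, not part of the patch) is to run the same query
on both nodes and compare the output, for example via the pg_sequences view:

    -- Run on both the publisher and the subscriber, then compare.
    SELECT schemaname, sequencename, last_value
      FROM pg_sequences
     ORDER BY schemaname, sequencename;

    -- If the subscriber has fallen behind, re-synchronize all sequences
    -- (sub1 is the subscription name used in the documentation example).
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;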

Attachment: v20241211-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 8afa2e9c8ccc4ca69d7eb5854be5d15efded205e Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Wed, 11 Dec 2024 15:17:01 +0530
Subject: [PATCH v20241211 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 636 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  76 ++-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 186 +++++
 26 files changed, 1436 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9fb98adb9d..eaea9a4577 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 394b7c5efe..a87472673b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * When getting tables, if all_states is true get all tables, otherwise
+ * get only tables that have not reached READY state.
+ * When getting sequences, if all_states is true get all sequences,
+ * otherwise get only sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +565,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +575,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,8 +590,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index da9a8fe99f..d10c3d8b56 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8b6c34a2c1..90510776e9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1043,8 +1045,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page_lsn of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 03e97730e7..1fe09c1f30 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +794,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +833,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or last refreshed.
+ *
+ * Note: this is a common function that handles the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +894,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +918,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +947,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +972,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +989,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1007,32 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop the tablesync worker when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,13 +1573,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1508,7 +1629,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1898,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2131,11 +2256,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
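To make the new fetch_sequence_list() concrete, the query it builds and runs
on the publisher is roughly the following ('pub1' is just an assumed
publication name; pg_publication_sequences is the view this patch set relies
on):

    SELECT DISTINCT s.schemaname, s.sequencename
    FROM pg_catalog.pg_publication_sequences s
    WHERE s.pubname IN ('pub1');

The DISTINCT matters when the same sequence is published by more than one of
the subscription's publications.
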
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 68deea50f6..f7ee2bbaf5 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -871,7 +871,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 40059e2930..9b56c68aa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
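For reference, the new grammar maps to a subscriber-side command along these
lines (sub1 is a hypothetical subscription name):

    -- resynchronize all sequences of the subscription, resetting them to INIT
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- the existing form, which now also picks up newly added sequences
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data = true);

Note that REFRESH PUBLICATION SEQUENCES takes no WITH clause; copy_data is
implied because AlterSubscription_refresh() is called with copy_data = true.
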
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..c7f1ff51d6 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the failure time of the sequencesync worker in the shared-memory
+ * entry of the subscription's apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
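As a quick way to observe the new worker type, the monitoring view should
report it while a sequencesync worker is running, e.g. (assuming the
pg_stat_subscription view exposes the worker_type column as in recent
releases):

    SELECT subname, pid, worker_type
    FROM pg_stat_subscription
    WHERE worker_type = 'sequence synchronization';
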
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..6ecd1f6872
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,636 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still sequences to be synced after the sequencesync worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence list from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction for each one. At the same time, we want to
+ * avoid keeping transactions open for extended periods, so the batch size
+ * should not be set excessively high either.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that synchronous replication can distinguish this
+	 * worker from the leader apply worker.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence is synchronized as the sequence owner,
+		 * unless the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence synchronization does not honor RLS policies.  That is not a
+		 * problem for subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * If the sequence copy fails, emit a warning for the sequences whose
+		 * parameters did not match before re-throwing the error.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
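For reference, fetch_remote_sequence_data() above runs roughly the following
on the publisher, with 16384 standing in for the remote sequence OID
(pg_sequence_state() is the support function this patch set relies on):

    SELECT last_value, log_cnt, is_called, page_lsn,
           seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
    FROM pg_sequence_state(16384), pg_sequence
    WHERE seqrelid = 16384;

last_value, log_cnt and is_called are applied to the local sequence via
SetSequence(), page_lsn is recorded in pg_subscription_rel, and the remaining
columns are only compared against the local pg_sequence row to detect
parameter mismatches.
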
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 9a39b0d7c0..44018b8741 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -34,14 +34,17 @@ typedef enum
 
 static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
 List	   *table_states_not_ready = NIL;
+List	   *sequence_states_not_ready = NIL;
 
 /*
  * Exit routine for synchronization worker.
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -56,15 +59,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -79,7 +91,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized and
+ * start new tablesync workers for newly added tables, as well as a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -99,7 +113,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -111,17 +137,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes to
+ * either sequences or tables can affect the validity of relation states, so we
+ * update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared static so that the value is remembered across calls,
+	 * until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -134,16 +165,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -151,7 +185,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -176,5 +214,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 64311a420d..8c2ce6af6b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -159,7 +159,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -206,7 +206,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -332,7 +332,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -374,9 +374,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -409,6 +406,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -422,11 +427,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -463,8 +463,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1241,7 +1241,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1559,7 +1559,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1567,7 +1567,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1581,17 +1581,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 376b70cfaa..cc6bdd4f4b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4626,8 +4652,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4706,6 +4732,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4725,14 +4755,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4777,6 +4810,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad2..7260ab1763 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3258,7 +3258,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 5884b7b2ab..1fb0642dbe 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f9aeb40f02..4cc5d629f9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12108,6 +12108,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 7637f67518..733d2e15b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index b2395e7b57..993034bb2f 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000000..b4734d0368
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

#175Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#174)
1 attachment(s)
Re: Logical Replication of sequences

Hi Vignesh.

Here are some review comments for patch v20241211-0001.

======
src/backend/commands/sequence.c

pg_sequence_state:

1.
+ TupleDesc tupdesc;
+ HeapTuple tuple;
+ Datum values[4];
+ bool nulls[4] = {false, false, false, false};

SUGGESTION
bool nulls[4] = {0};

The above achieves the same thing, but more succinctly, and I think it
is a common enough pattern in the PG source code.

~~~

2.
+ seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+ page = BufferGetPage(buf);
+ lsn = PageGetLSN(page);
+
+ last_value = seq->last_value;
+ log_cnt = seq->log_cnt;
+ is_called = seq->is_called;

Move the blank line, so the 'lsn' assignment will be grouped with the
other 3 assignments which are part of the return tuple.

~~~

3.
+ /* How many fetches remain before a new WAL record has to be written */
+ values[2] = Int64GetDatum(log_cnt);

Trivial change to use the same wording as the documentation.

/has to/must/

======

(The attached NITPICKS patch includes the above suggestions)

======
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

PS_NITPICKS_20241218_SEQ_0001.txt (text/plain; charset=US-ASCII)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 8b6c34a..aff2c1a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1912,7 +1912,7 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	TupleDesc	tupdesc;
 	HeapTuple	tuple;
 	Datum		values[4];
-	bool		nulls[4] = {false, false, false, false};
+	bool		nulls[4] = {0};
 
 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
 		elog(ERROR, "return type must be a row type");
@@ -1929,8 +1929,8 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 
 	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
 	page = BufferGetPage(buf);
-	lsn = PageGetLSN(page);
 
+	lsn = PageGetLSN(page);
 	last_value = seq->last_value;
 	log_cnt = seq->log_cnt;
 	is_called = seq->is_called;
@@ -1944,7 +1944,7 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	/* The value most recently returned by nextval in the current session */
 	values[1] = Int64GetDatum(last_value);
 
-	/* How many fetches remain before a new WAL record has to be written */
+	/* How many fetches remain before a new WAL record must be written */
 	values[2] = Int64GetDatum(log_cnt);
 
 	/* Indicates whether the sequence has been used */
#176Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#174)
Re: Logical Replication of sequences

Hi Vignesh,

Here are some review comments for the patch v20241211-0002.

======
doc/src/sgml/ref/create_publication.sgml

1.
+<phrase>where <replaceable class="parameter">object
type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES

The replaceable "object_type" is missing an underscore.

~~~

publish option effect for SEQUENCE replication?

2.
It's not obvious to me whether the SEQUENCE replication stuff is
affected by the setting of pubactions (i.e. the 'publish' option). I'm
thinking that probably anything to do with SEQUENCEs may not be
considered a DML operation, but if that is true it might be better to
explicitly say so.

Also, we might need to include a test to show that even if publish=''
the SEQUENCE synchronization is not affected.

======
src/backend/commands/publicationcmds.c

CreatePublication:

3.
- /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
  ereport(ERROR,
  (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("must be superuser to create FOR ALL TABLES publication")));
+ errmsg("must be superuser to create a %s publication",
+ stmt->for_all_tables ? "FOR ALL TABLES" :
+ "FOR ALL SEQUENCES")));

It seems a quirk that if FOR ALL TABLES and FOR ALL SEQUENCES are
specified at the same time then you would only get the "FOR ALL
TABLES" error, but maybe that is OK?

~~~

AlterPublicationOwner_internal:

4.
Ditto quirk as commented above, but maybe it is OK.

======
src/bin/psql/describe.c

describePublications:

5.
It seems the ordering of the local variables, and then the attributes
in the SQL, and the headings in the "describe" output are a bit
muddled.

IMO it might be better to always keep things in the same order as the
eventual display headings. So anything to do with puballsequences
should be immediately after anything to do with puballtables.

(There are multiple changes needed in this function to rearrange
things to be this way).

~~~

6.
The following seems wrong because now somehow there are two lots of
index 9 (???)

-----
printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
if (has_pubsequence)
printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false); /*
all sequences */
printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
if (has_pubtruncate)
printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
if (has_pubgencols)
printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
if (has_pubviaroot)
printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
-----

======
src/test/regress/expected/psql.out

7.
+\dRp+ regress_pub_forallsequences1
+                                            Publication
regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts |
Updates | Deletes | Truncates | Generated columns | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t
     | t       | t         | f                 | f
+(1 row)
+

The expected value of 'f' for "All sequences" looks wrong to me. I
think this might be a manifestation of that duplicated '9' index
mentioned in an earlier review comment #6.

~~~

8.
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication
regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts |
Updates | Deletes | Truncates | Generated columns | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t
     | t       | t         | f                 | f
+(1 row)
+

The expected value of 'f' for "All sequences" looks wrong to me. I
think this might be a manifestation of that duplicated '9' index
mentioned in an earlier review comment #6.

~~~

9.
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes
| Truncates | Generated columns | Via root
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t
| t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts |
Updates | Deletes | Truncates | Generated columns | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t
     | t       | t         | f                 | t

AFAIK these partition tests should not be impacting the "All
Sequences" flag value, so the expected value of 't' for "All
sequences" looks wrong to me. I think this might be a manifestation of
that duplicated '9' index mentioned in an earlier review comment #6.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#177Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#174)
Re: Logical Replication of sequences

Hi Vignesh,

Here are some review comments for the patch v20241211-0003.

======
src/backend/replication/logical/syncutils.c

1.
+typedef enum
+{
+ SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+ SYNC_RELATIONS_STATE_REBUILD_STARTED,
+ SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+

Even though the patch's intent was only to "move" this from
tablesync.c, this enum probably deserves a comment describing its
purpose.

~~~

2.
+List *table_states_not_ready = NIL;

Maybe it is convenient to move this, but there is something about this
list being exposed out of a "utils" module back to the tablesync.c
module that seems a bit strange. Would it make more sense for this to
be the other way around e.g. declared in the tablesync.c but exposed
to the syncutils.c? (and then similarly in subsequent patch 0004 the
sequence_states_not_ready would belong in the sequencesync.c)

~~~

3.
+/*
+ * Process possible state change(s) of tables that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{

IIUC there was a deliberate effort to rename some comments and
functions to say "relations" instead of "tables". AFAICT that was done
to encompass the SEQUENCES which can fit under the umbrella of
"relations". Anyway, it becomes confusing sometimes when there is a
mismatch. For example, here is a function (relations) with a function
comment (tables).

~~~

4.
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)

Here is another place where the function name is "relations", but the
function comment refers to "tables".

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#178Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#174)
Re: Logical Replication of sequences

Hi Vignesh,

Here are some review comments for the patch v20241211-0004.

======
GENERAL

1.
There are more than a dozen places where the relation (relkind) is
checked to see if it is a SEQUENCE:

e.g. + get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
e.g. + if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
e.g. + if (relkind == RELKIND_SEQUENCE && !get_sequences)
e.g. + if (relkind != RELKIND_SEQUENCE && !get_tables)
e.g. + relkind == RELKIND_SEQUENCE ? "sequence" : "table",
e.g. + if (relkind != RELKIND_SEQUENCE)
e.g. + relkind == RELKIND_SEQUENCE ? "sequence" : "table",
e.g. + if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
e.g. + if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
e.g. + relkind != RELKIND_SEQUENCE)
e.g. + Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
e.g. + Assert(relkind == RELKIND_SEQUENCE);
e.g. + if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
e.g. + Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);

I am wondering if the code might be improved slightly by adding one new macro:

#define RELKIND_IS_SEQUENCE(relkind) ((relkind) == RELKIND_SEQUENCE)
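
For example, the checks listed above would then read a bit more
uniformly (just a sketch using the proposed macro):

if (!RELKIND_IS_SEQUENCE(get_rel_relkind(subrel->srrelid)))
    continue;

Assert(RELKIND_IS_SEQUENCE(relkind));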

======
Commit message

2.
1) CREATE SUBSCRIPTION
- (PG17 command syntax is unchanged)
- The subscriber retrieves sequences associated with publications.
- Publisher sequences are added to pg_subscription_rel with INIT state.
- Initiates the sequencesync worker (see above) to synchronize all
sequences.

~

Shouldn't this say "Published sequences" instead of "Publisher sequences"?

I guess if the patch currently supports only ALL SEQUENCES then maybe
it amounts to the same thing, but IMO "Published" seems more correct.

~~~

3.
2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
- (PG17 command syntax is unchanged)
- Dropped publisher sequences are removed from pg_subscription_rel.
- New publisher sequences are added to pg_subscription_rel with INIT state.
- Initiates the sequencesync worker (see above) to synchronize only
newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
- The patch introduces this new command to refresh all sequences
- Dropped publisher sequences are removed from pg_subscription_rel.
- New publisher sequences are added to pg_subscription_rel
- All sequences in pg_subscription_rel are reset to INIT state.
- Initiates the sequencesync worker (see above) to synchronize all
sequences.

~

Ditto previous comment -- maybe those should say "Newly published
sequences" instead of "New publisher sequences".

~~~

4.
Should there be some mention of the WARNING logged if sequence
parameter differences are detected?

======
src/backend/catalog/pg_subscription.c

GetSubscriptionRelations:

5.
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.

and

- if (not_ready)
+ if (!all_states)
ScanKeyInit(&skey[nkeys++],
Anum_pg_subscription_rel_srsubstate,
BTEqualStrategyNumber, F_CHARNE,
CharGetDatum(SUBREL_STATE_READY));
~

It was a bit confusing that the code for (!all_states) is excluding
everything that is not in SUBREL_STATE_READY, whereas the function
comment for sequences says "that are in INIT state". Maybe the
function comment for sequences should also say "that have not
reached READY state (i.e. are still in INIT state)" to better match
the code.

~~~

6.
+ /* Skip sequences if they were not requested */
+ if (relkind == RELKIND_SEQUENCE && !get_sequences)
+ continue;
+
+ /* Skip tables if they were not requested */
+ if (relkind != RELKIND_SEQUENCE && !get_tables)
+ continue;

Somehow, I feel this logic would seem simpler if expressed differently
to make it more explicit that the relation is either a table or a
sequence. e.g. by adding some variables.

SUGGESTION:

bool is_sequence;
bool is_table;

...

/* Relation is either a sequence or a table */
is_sequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
is_table = !is_sequence;

/* Skip sequences if they were not requested */
if (!get_sequences && is_sequence)
continue;

/* Skip tables if they were not requested */
if (!get_tables && is_table)
continue;

======
src/backend/commands/sequence.c

7.
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)

~

Not sure this function comment is right. Shouldn't it still say
"Implement the 2 arg setval procedure."

~~~

8.
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)

~

Not sure this function comment is right. Shouldn't it still say
"Implement the 3 arg setval procedure."

======
src/backend/commands/subscriptioncmds.c

AlterSubscription_refresh:

9.
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding
or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or publication refresh.

The first para says "refresh publication". The second para says
"publication refresh", but I guess it should also be saying "refresh
publication".

~~~

10.
+ if (resync_all_sequences)
+ {
+
+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+    InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name));
+ }

Has unnecessary blank line.

~~~

AlterSubscription:

11.
Some of the generic error messages in this function are now
potentially misleading.

e.g. There are multiple places in this function that say "ALTER
SUBSCRIPTION ... REFRESH", meaning ALTER SUBSCRIPTION <subname>
REFRESH PUBLICATION, but not meaning ALTER SUBSCRIPTION <subname>
REFRESH PUBLICATION SEQUENCES, so possibly those need to be modified
to eliminate any ambiguity.

(Actually, maybe it is not only in this function -- the short form
"ALTER SUBSCRIPTION ... REFRESH" seems to be scattered in other
comments in this file also).

~~~

12.
- logicalrep_worker_stop(w->subid, w->relid);
+ /* Worker might have exited because of an error */
+ if (w->type == WORKERTYPE_UNKNOWN)
+ continue;
+
+ logicalrep_worker_stop(w->subid, w->relid, w->type);

It may be better to put that special case WORKERTYPE_UNKNOWN condition
as a quick exit within the logicalrep_worker_stop() function.
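
e.g. a rough sketch of what I mean (untested):

void
logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
{
    /* The worker might have already exited because of an error. */
    if (wtype == WORKERTYPE_UNKNOWN)
        return;

    ...
}

Then the callers would not need that special case at all.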

======
src/backend/replication/logical/launcher.c

logicalrep_worker_find:

13.
+ Assert(wtype == WORKERTYPE_TABLESYNC ||
+    wtype == WORKERTYPE_SEQUENCESYNC ||
+    wtype == WORKERTYPE_APPLY);
+

The master version of this function checked for
isParallelApplyWorker(w), but now if a WORKERTYPE_PARALLEL_APPLY ever
got to here it would fail the above Assert. So, is the patch code OK,
or do we still need to also account for a possible
WORKERTYPE_PARALLEL_APPLY reaching here?

~~~

logicalrep_worker_stop:

14.
worker = logicalrep_worker_find(subid, relid, wtype, false);

if (worker)
{
Assert(!isParallelApplyWorker(worker));
logicalrep_worker_stop_internal(worker, SIGTERM);
}

~

This code is not changed much from before, so it first finds the
worker, but then asserts that it must not be a parallel apply
worker. But now, since the wtype is known and passed to the function,
why not move the Assert up-front based on the wtype, before even
doing the 'find'?
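
e.g. something like (only a sketch):

Assert(wtype != WORKERTYPE_PARALLEL_APPLY);

worker = logicalrep_worker_find(subid, relid, wtype, false);

if (worker)
    logicalrep_worker_stop_internal(worker, SIGTERM);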

======
.../replication/logical/sequencesync.c

ProcessSyncingSequencesForApply:

15.
+ /*
+ * Check if there is a sequence worker already running?
+ */
+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+ syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+ InvalidOid, WORKERTYPE_SEQUENCESYNC,
+ true);
+ if (syncworker)
+ {
+ /* Now safe to release the LWLock */
+ LWLockRelease(LogicalRepWorkerLock);
+ break;
+ }

15a.
Comment: /sequence worker/sequencesync worker/

~

15b.
Maybe it's better to call this variable 'sequencesync_worker' or
similar, because 'syncworker' is too generic.

~~~

fetch_remote_sequence_data:

16.
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.

The SELECT of this function fetches a lot more columns, so why are
only these few mentioned in the function comment?

~

17.
+ res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ errmsg("could not receive sequence list from the publisher: %s",
+    res->err));

That error msg does not seem quite right. IIUC, this is just the data
from a single sequence; it is not a "sequence list".
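
Maybe something more like the below would be closer (exact wording up
to you; this assumes 'nspname' and 'relname' are available here):

ereport(ERROR,
        errmsg("could not receive sequence data for sequence \"%s.%s\" from the publisher: %s",
               nspname, relname, res->err));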

~~~

report_mismatched_sequences:

18.
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+ if (mismatched_seqs->len)
+ {
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("parameters differ for the remote and local sequences (%s)
for subscription \"%s\"",
+    mismatched_seqs->data, MySubscription->name),
+ errhint("Alter/Re-create local sequences to have the same parameters
as the remote sequences."));
+
+ resetStringInfo(mismatched_seqs);
+ }
+}

I'm confused. The errhint says "Alter/Re-create local sequences to
have the same parameters", but in the function 'copy_sequence' there
is code (below) that seems to already be setting the sequence
values (SetSequence) regardless of whether it detected
sequence_mismatch true/false.

+ *sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+ nspname, relname,
+ &seq_log_cnt, &seq_is_called,
+ &seq_page_lsn, &seq_last_value);
+
+ SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+ seq_log_cnt);

Is setting the values regardless like that OK, or is it just going to
lead to weird integrity errors?
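
e.g. was it maybe intended to be guarded, something like (sketch):

if (!(*sequence_mismatch))
    SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
                seq_log_cnt);

Or is applying the remote values even for mismatched definitions
deliberate? If so, maybe it is the errhint wording that needs
adjusting instead.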

~~~

LogicalRepSyncSequences:

19.
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)

/sync worker/sequencesync worker/

~~~

20.
+ curr_seq++;
+
+ /*
+ * Have we reached the end of the current batch of sequences, or last
+ * remaining sequences to synchronize?
+ */
+ if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+ curr_seq == seq_count)
+ {
+ /* LOG all the sequences synchronized during current batch. */
+ for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+ i < curr_seq; i++)
+ {

The calculation is quite tricky.

IMO this might all be simplified if you have another variable like
'batch_seq' that just ranges from 0 to MAX_SEQUENCES_SYNC_PER_BATCH,
and then do something like:

SUGGESTION:

cur_seq++;
batch_seq++;

if (batch_seq >= MAX_SEQUENCES_SYNC_PER_BATCH || cur_seq == seq_count)
{
/* Process the batch. */

...

/* LOG all the sequences synchronized during the current batch. */
for (int i = 0; i < batch_seq; i++)
{
...
}

/* Prepare for next batch */
batch_seq = 0;
}

======
src/backend/replication/logical/syncutils.c

21.
List *table_states_not_ready = NIL;
+List *sequence_states_not_ready = NIL;

I thought this declaration belonged more in the sequencesync.c file.

~~~

SyncProcessRelations:

22.
 /*
- * Process possible state change(s) of tables that are being synchronized.
+ * Process possible state change(s) of tables that are being synchronized and
+ * start new tablesync workers for the newly added tables and start new
+ * sequencesync worker for the newly added sequences.
  */

/added tables and start new sequencesync worker/added tables. Also,
start a new sequencesync worker/

~~~

FetchRelationStates:

23.
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)

Partly because of the name (relations), I felt this might be better
as a void function, with the returned value passed back by
reference (bool *has_tables).
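
i.e. a signature more like this (sketch):

void FetchRelationStates(bool *has_tables);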

======
src/backend/replication/logical/worker.c

SetupApplyOrSyncWorker:

24.
+
+ if (am_sequencesync_worker())
+ before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);

There should be a comment saying what this callback is for.
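
e.g. assuming (from the function name) that the callback just records
the sequencesync worker's failure time, perhaps something along these
lines:

/*
 * Register a callback so this sequencesync worker's failure time can
 * be recorded at exit (see logicalrep_seqsyncworker_failuretime).
 */
if (am_sequencesync_worker())
    before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);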

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#179Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#174)
Re: Logical Replication of sequences

Hi Vignesh.

Here are some review comments for the patch v20241211-0005.

======
doc/src/sgml/logical-replication.sgml

Section "29.6.1. Sequence Definition Mismatches"

1.
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>

Maybe this should say *when* this happens.

SUGGESTION
During sequence synchronization, the sequence definitions of the
publisher and the subscriber are compared. A WARNING is logged if any
differences are detected.

~~~

Section "29.6.3. Examples"

2.
Should the Examples section also have an example of ALTER SUBSCRIPTION
... REFRESH PUBLICATION to demonstrate (like in the TAP tests) that if
the sequences are already known, then those are not synchronised?

~~~

Section "29.8. Restrictions"

3.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either
by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.

I don't know if you need to mention it, or maybe it is too obvious,
but the suggestion here to use "ALTER SUBSCRIPTION ... REFRESH
PUBLICATION SEQUENCES" assumed you've already arranged for the
PUBLICATION to be publishing sequences before this.

======
doc/src/sgml/ref/alter_subscription.sgml

4.
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
synchronize
+          sequences in the publications that are being subscribed to
when the replication
+          starts. The default is <literal>true</literal>.
          </para>

This also talks about "synchronize sequences" when the replication
starts, but it is a bit confusing. IIUC, the ... REFRESH
PUBLICATION only synchronizes *newly added* sequences anyway, so does
it mean even that will not happen if copy_data=false?

I think this option needs more clarification on how it interacts with
sequences. Also, I don't recall seeing any test for sequences and
copy_data in the patch 0004 TAP tests, so maybe something needs to be
added there too.

~~~

5.
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify sequences and handle out-of-sync sequences.
+     </para>

/on how to identify sequences and handle out-of-sync sequences./on how
to identify and handle out-of-sync sequences./

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#180vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#176)
5 attachment(s)
Re: Logical Replication of sequences

On Wed, 18 Dec 2024 at 11:40, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Here are some review comments for the patch v20241211-0002.
~~~

publish option effect for SEQUENCE replication?

2.
It's not obvious to me whether the SEQUENCE replication stuff is
affected by the setting of pubactions (i.e. the 'publish' option). I'm
thinking that probably anything to do with SEQUENCEs may not be
considered a DML operation, but if that is true it might be better to
explicitly say so.

Also, we might need to include a test to show that even if publish=''
the SEQUENCE synchronization is not affected.

Since we only synchronize the data and do not replicate incremental
changes (insert/update/delete apply only to incremental data sync), I
felt we need not mention this explicitly here, as the 0005 patch
already mentions that "Incremental sequence changes are not
replicated". I also felt that there was no need to add a testcase for
this, as currently it has nothing to do with this option.

======
src/backend/commands/publicationcmds.c

CreatePublication:

3.
- /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("must be superuser to create FOR ALL TABLES publication")));
+ errmsg("must be superuser to create a %s publication",
+ stmt->for_all_tables ? "FOR ALL TABLES" :
+ "FOR ALL SEQUENCES")));

It seems a quirk that if FOR ALL TABLES and FOR ALL SEQUENCES are
specified at the same time then you would only get the "FOR ALL
TABLES" error, but maybe that is OK?

I have modified the error message slightly

~~~

AlterPublicationOwner_internal:

4.
Ditto quirk as commented above, but maybe it is OK.

I have modified the error message slightly

The rest of the comments are fixed. Also the comments from [1]/messages/by-id/CAHut+Psk1V2eUL_nreWp1trO1iSDqhDtBnfu65PrsoorpuNzKA@mail.gmail.com are fixed.
The attached v202412123 version patch has the changes for the same.

[1]: /messages/by-id/CAHut+Psk1V2eUL_nreWp1trO1iSDqhDtBnfu65PrsoorpuNzKA@mail.gmail.com

Regards,
Vignesh

Attachments:

v202412123-0003-Reorganize-tablesync-Code-and-Introduce-s.patch (text/x-patch; charset=US-ASCII)
From 87d2b8e616f627433d2a6ddba2d8af84e1a7e3ae Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v202412123 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 234 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 89bf5ec933..394b7c5efe 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..cd7e26fad2
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum representing the overall state of subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD The subscription relations state is no
+ * longer valid and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 7c8a0e9cfe..9917fb8b25 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 9e50c880f8..0fe6512b4e 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4778,7 +4778,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a2ec350326..0b9e9d0c08 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2806,7 +2806,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v202412123-0002-Introduce-ALL-SEQUENCES-support-for-Postg.patch (text/x-patch)
From 075e1fba2d1ec85c6c60fb270b764c3f2d083313 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Sun, 8 Dec 2024 13:37:31 +0000
Subject: [PATCH v202412123 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d and \dRp commands are enhanced to display the
publications that include a given sequence and whether a publication
publishes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
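
For illustration, a minimal sketch of the syntax added by this patch, based
on the documentation examples it introduces (only the publication-side
syntax is added here; creating these publications requires superuser):

-- Publish every sequence in the database, including ones created later.
CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;

-- Tables and sequences can be combined in a single publication.
CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;

-- \dRp+ then reports an "All sequences" attribute for such publications.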
---
 doc/src/sgml/ref/create_publication.sgml  |  65 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  14 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 252 +++++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 776 insertions(+), 341 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 5e25536554..9a19db863c 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -118,16 +123,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +154,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +276,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +295,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -431,6 +447,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9bbb60463f..9fb98adb9d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 5050057a7e..a201587475 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -786,11 +786,11 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create ALL TABLES and/or ALL SEQUENCES publication")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -824,6 +824,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1957,12 +1959,14 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of ALL TABLES and/or ALL SEQUENCES publication must be a superuser.")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 67eb96396a..40059e2930 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19489,6 +19522,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 19969e400f..a4540eafd0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4285,6 +4285,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4315,9 +4316,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4333,6 +4334,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4353,6 +4355,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4404,8 +4408,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 9c5ddd20cf..da7a0c1030 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -633,6 +633,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index aa1564cd45..62a6edcbd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 2657abdc72..f8bbcffd85 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1737,28 +1737,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1767,22 +1758,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1792,6 +1776,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1823,32 +1860,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
 
-		free(footers[0]);
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2071,6 +2129,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6318,7 +6382,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6335,13 +6399,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6447,6 +6518,19 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
+	int			puboid_col = -1,	/* column indexes in "res" */
+				pubname_col = -1,
+				pubowner_col = -1,
+				puballtables_col = -1,
+				puballsequences_col = -1,
+				pubins_col = -1,
+				pubupd_col = -1,
+				pubdel_col = -1,
+				pubtrunc_col = -1,
+				pubgen_col = -1,
+				pubviaroot_col = -1;
+	int			cols = 0;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6464,22 +6548,52 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+	puboid_col = cols++;
+	pubname_col = cols++;
+	pubowner_col = cols++;
+	puballtables_col = cols++;
+
+	if (has_pubsequence)
+	{
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+		puballsequences_col = cols++;
+	}
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+	pubins_col = cols++;
+	pubupd_col = cols++;
+	pubdel_col = cols++;
+
 	if (has_pubtruncate)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
+		pubtrunc_col = cols++;
+	}
+
 	if (has_pubgencols)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubgencols");
+		pubgen_col = cols++;
+	}
+
 	if (has_pubviaroot)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+		pubviaroot_col = cols++;
+	}
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6523,9 +6637,9 @@ describePublications(const char *pattern)
 		const char	align = 'l';
 		int			ncols = 5;
 		int			nrows = 1;
-		char	   *pubid = PQgetvalue(res, i, 0);
-		char	   *pubname = PQgetvalue(res, i, 1);
-		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+		char	   *pubid = PQgetvalue(res, i, puboid_col);
+		char	   *pubname = PQgetvalue(res, i, pubname_col);
+		bool		puballtables = strcmp(PQgetvalue(res, i, puballtables_col), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
 		if (has_pubtruncate)
@@ -6534,6 +6648,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6541,6 +6657,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6551,17 +6669,19 @@ describePublications(const char *pattern)
 		if (has_pubviaroot)
 			printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
 
-		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubowner_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, puballtables_col), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, puballsequences_col), false, false);	/* all sequences */
+		printTableAddCell(&cont, PQgetvalue(res, i, pubins_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubupd_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubdel_col), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubtrunc_col), false, false);
 		if (has_pubgencols)
-			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubgen_col), false, false);
 		if (has_pubviaroot)
-			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubviaroot_col), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 31c77214b4..4936f5bd68 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3491,12 +3491,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index e2d894a2ff..12613d22e2 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -112,6 +118,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -147,6 +154,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9462493e..7637f67518 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index c48f11f293..96a51bf687 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1801,18 +1873,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1823,50 +1895,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c4c21a95d0..74d12ca2d1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fbdb932e6b..a2ec350326 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2262,6 +2262,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

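To make the new syntax exercised by the regression tests above easier to follow, here is a minimal publisher-side sketch (illustrative publication names; it assumes the patch set is applied):

    -- publish every sequence in the database
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- tables and sequences can be combined in a single FOR ALL list
    CREATE PUBLICATION pub_everything FOR ALL SEQUENCES, TABLES;

    -- the new puballsequences flag is exposed in pg_publication,
    -- and \dRp+ gains an "All sequences" column
    SELECT pubname, puballtables, puballsequences
      FROM pg_publication
     WHERE pubname IN ('pub_seqs', 'pub_everything');

Repeating TABLES or SEQUENCES within the FOR ALL list is an error, as the failure cases at the end of the SQL hunk show.
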
v202412123-0001-Introduce-pg_sequence_state-function-for-.patch (text/x-patch)
From cbc2ed07ade639b2965ed876b43789180e9127a8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v202412123 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 47370e581a..9bb896e9a5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19690,6 +19690,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..aff2c1a6c0 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence, as stored on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 2dcc2d42da..8598599686 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0

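As a quick illustration of 0001, pg_sequence_state() can be called like the existing sequence inspection functions (a sketch based on the regression test above, using a throwaway sequence name; the page_lsn value will naturally differ per installation):

    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');

    -- page_lsn is the LSN of the sequence's page; last_value, log_cnt
    -- and is_called mirror the on-disk sequence tuple
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('demo_seq');

Per the documentation hunk, the caller needs USAGE or SELECT privilege on the sequence.
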
v202412123-0005-Documentation-for-sequence-synchronizatio.patch (text/x-patch)
From 4d8834abd4aa73ecaf17ad8ad2caed70336929af Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v202412123 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  10 +-
 doc/src/sgml/logical-replication.sgml     | 222 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 352 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index bf3cee08a9..532c573987 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8119,16 +8119,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8162,7 +8165,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fbdd6ce574..18a1fff71c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5241,8 +5241,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5265,10 +5265,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 8290cd1a08..0b05d99d81 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,200 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     If there are differences in sequence definitions between the publisher and
+     subscriber, a WARNING is logged.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1876,16 +2070,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2189,8 +2385,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2204,7 +2400,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d0d176cc54..3d841658cf 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..457a614ea6 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how
+      to identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a586156614..de82964f6c 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

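Putting the 0005 documentation together, the check-and-refresh cycle on an already running setup looks roughly like this (using the same object names as the documentation examples above):

    -- publisher: which sequences does the publication carry?
    SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';

    -- subscriber: compare the current sequence values with the publisher ...
    SELECT last_value, log_cnt, is_called FROM s1;

    -- ... and re-synchronize all subscribed sequences if they have drifted
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

Any WARNING about a mismatched sequence definition should first be resolved with ALTER SEQUENCE on the subscriber, as described in the new "Sequence Definition Mismatches" subsection, before refreshing again.
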
v202412123-0004-Enhance-sequence-synchronization-during-s.patch (text/x-patch)
From 3b57638f4c19d3589a7c14fa2e68fc4b4f29c8a6 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 23 Dec 2024 12:22:05 +0530
Subject: [PATCH v202412123 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences in INIT state using pg_sequence_state().
    b) Sets the local sequence values accordingly.
    c) Updates the local sequence state to READY.
    d) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped publisher sequences are removed from pg_subscription_rel.
    - New publisher sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  58 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  31 +-
 src/backend/commands/subscriptioncmds.c       | 304 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  70 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 638 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  41 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 186 +++++
 26 files changed, 1437 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9fb98adb9d..eaea9a4577 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 394b7c5efe..a87472673b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that are in INIT state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +565,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +575,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,8 +590,18 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		/* Skip sequences if they were not requested */
+		if (relkind == RELKIND_SEQUENCE && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (relkind != RELKIND_SEQUENCE && !get_tables)
+			continue;
 
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index da9a8fe99f..d10c3d8b56 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index aff2c1a6c0..ac640c2873 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1043,8 +1045,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 }
 
 /*
- * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 2 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
- * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * Implement the 3 arg set sequence procedure.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn will be utilized in logical replication sequence
+ * synchronization to record the page_lsn of the sequence in the pg_subscription_rel
+ * system catalog. It will reflect the page_lsn of the remote sequence at the
+ * moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 03e97730e7..1fe09c1f30 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The relation list includes
+			 * both tables and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +794,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +833,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or the publication last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or the publication last refreshed.
+ *
+ * Note that this is a common function for handling different REFRESH commands,
+ * distinguished by the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state. The caller must
+ *    pass copy_data = true, refresh_tables = false, and
+ *    refresh_sequences = true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
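+ *
+ *    For example (illustrative subscription name):
+ *      ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ *    re-synchronizes the data of all subscribed sequences, whereas
+ *      ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
+ *    only updates the newly added or removed relations (honoring copy_data).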
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +894,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +918,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +947,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +972,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +989,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1007,32 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Sequences do not have tablesync slots, so skip them. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,13 +1573,28 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
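+				/*
+				 * Mark all subscribed sequences as INIT so that their data is
+				 * re-synchronized; tables are not touched here.
+				 */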
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
@@ -1508,7 +1629,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,11 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		/* Worker might have exited because of an error */
+		if (w->type == WORKERTYPE_UNKNOWN)
+			continue;
+
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1898,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2131,11 +2256,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
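+		/* The filter applies only to tables, so skip sequences. */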
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 68deea50f6..f7ee2bbaf5 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -871,7 +871,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 40059e2930..9b56c68aa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..c7f1ff51d6 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,29 +235,28 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	Assert(wtype == WORKERTYPE_TABLESYNC ||
+		   wtype == WORKERTYPE_SEQUENCESYNC ||
+		   wtype == WORKERTYPE_APPLY);
+
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		/* Skip parallel apply workers. */
-		if (isParallelApplyWorker(w))
-			continue;
-
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..37fd9cafb2
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,638 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
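+ * (Unlike table synchronization, there are no intermediate states such as
+ * DATASYNC or SYNCDONE for sequences.)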
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *syncworker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											InvalidOid, WORKERTYPE_SEQUENCESYNC,
+											true);
+		if (syncworker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
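+	/* Column datatypes, in the same order as the remote query below. */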
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive sequence state from the publisher: %s",
+					   res->err));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel,
+			  bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
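+	/*
+	 * Update the local sequence even if the parameters do not match; any
+	 * mismatch is reported to the caller, which warns the user.
+	 */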
+	SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+				seq_log_cnt);
+
+	/* return the LSN when the sequence state was set */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	bool		start_txn = true;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs the overhead of repeatedly
+ * starting and committing a transaction. At the same time, an excessively
+ * large batch would keep a transaction open (and sequence locks held) for an
+ * extended period.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that sequence synchronization runs as the sequence owner,
+		 * unless the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * Sequence synchronization does not go through RLS.  That is not a
+		 * problem for subscriptions owned by roles with BYPASSRLS privilege
+		 * (or superuser, who has it implicitly), but other roles should not
+		 * be able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+		UpdateSubscriptionRelState(subid, seqinfo->relid, SUBREL_STATE_READY,
+								   sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last remaining sequences to synchronize?
+		 */
+		if (((curr_seq % MAX_SEQUENCES_SYNC_PER_BATCH) == 0) ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during the current batch. */
+			for (int i = (curr_seq - 1) - ((curr_seq - 1) % MAX_SEQUENCES_SYNC_PER_BATCH);
+				 i < curr_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced, i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+		}
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index cd7e26fad2..5e73450d13 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -51,8 +51,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -67,15 +69,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -90,7 +101,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for newly added tables and a new
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -110,7 +123,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -122,17 +147,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared as static, since the same value can be used until the
+	 * system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -145,16 +175,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -162,7 +195,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +224,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9917fb8b25..57001155b7 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,17 +1583,10 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_subrels = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 0fe6512b4e..b7b3f72243 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4629,8 +4655,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4709,6 +4735,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4728,14 +4758,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4780,6 +4813,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad2..7260ab1763 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3258,7 +3258,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 4936f5bd68..13ab2d5c69 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8598599686..5524226dca 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12130,6 +12130,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 7637f67518..733d2e15b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..65206939fa 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,28 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +342,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index b2395e7b57..993034bb2f 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000000..b4734d0368
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,186 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s4"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

#181vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#177)
Re: Logical Replication of sequences

On Thu, 19 Dec 2024 at 04:58, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Here are some review comments for the patch v20241211-0003.

~~~

4.
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)

Here is another place where the function name is "relations", but the
function comment refers to "tables".

In this place the use of "tables" in the comment is intentional, as
the return value is based on whether the subscription has any tables,
which is not applicable to sequences.

The rest of the comments are fixed, and the changes for them are
available in the v202412123 version patch shared at [1].
[1]: /messages/by-id/CALDaNm0FqKMqOdm7tNoT5KgK1BAMeeVnOXrSJ2024TscAbf4Og@mail.gmail.com

Regards,
Vignesh

#182vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#178)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 20 Dec 2024 at 06:27, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Here are some review comments for the patch v20241211-0004.

======
GENERAL

1.
There are more than a dozen places where the relation (relkind) is
checked to see if it is a SEQUENCE:

e.g. + get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
e.g. + if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
e.g. + if (relkind == RELKIND_SEQUENCE && !get_sequences)
e.g. + if (relkind != RELKIND_SEQUENCE && !get_tables)
e.g. + relkind == RELKIND_SEQUENCE ? "sequence" : "table",
e.g. + if (relkind != RELKIND_SEQUENCE)
e.g. + relkind == RELKIND_SEQUENCE ? "sequence" : "table",
e.g. + if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
e.g. + if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
e.g. + relkind != RELKIND_SEQUENCE)
e.g. + Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
e.g. + Assert(relkind == RELKIND_SEQUENCE);
e.g. + if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
e.g. + Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);

I am wondering if the code might be improved slightly by adding one new macro:

#define RELKIND_IS_SEQUENCE(relkind) ((relkind) == RELKIND_SEQUENCE)

I was not sure about this, as it is done like that in other parts of
the code (e.g. in aclchk.c); should we try this change as a separate
patch?
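
Just as an illustration of the idea (a standalone sketch, not taken
from the posted patches; the RELKIND_* values are the ones defined in
pg_class.h, and the macro itself is only the suggestion quoted above):

#include <stdio.h>

/* Values as defined in src/include/catalog/pg_class.h. */
#define RELKIND_RELATION	'r'
#define RELKIND_SEQUENCE	'S'

/* The helper macro suggested in review comment 1 above. */
#define RELKIND_IS_SEQUENCE(relkind) ((relkind) == RELKIND_SEQUENCE)

int
main(void)
{
	char		relkind = RELKIND_SEQUENCE;

	/* Reads a bit better than repeating the comparison everywhere. */
	if (RELKIND_IS_SEQUENCE(relkind))
		printf("sequence\n");
	else
		printf("table or other relation\n");

	return 0;
}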

12.
- logicalrep_worker_stop(w->subid, w->relid);
+ /* Worker might have exited because of an error */
+ if (w->type == WORKERTYPE_UNKNOWN)
+ continue;
+
+ logicalrep_worker_stop(w->subid, w->relid, w->type);

It may be better to put that special case WORKERTYPE_UNKNOWN condition
as a quick exit within the logicalrep_worker_stop() function.

I have removed this, since it will be handled by
logicalrep_worker_stop now that the Assert is removed.
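
For anyone following along, here is a rough standalone sketch of the
"quick exit" idea being discussed (simplified stand-ins, not the
actual launcher.c code):

#include <stdio.h>

/* Simplified stand-ins for the worker bookkeeping in launcher.c. */
typedef enum
{
	WORKERTYPE_UNKNOWN = 0,
	WORKERTYPE_TABLESYNC,
	WORKERTYPE_APPLY
} WorkerType;

typedef struct
{
	WorkerType	type;
} Worker;

/*
 * With the quick exit inside the stop routine, callers no longer need
 * to special-case workers that have already exited (type reset to
 * WORKERTYPE_UNKNOWN).
 */
static void
worker_stop(Worker *w)
{
	if (w->type == WORKERTYPE_UNKNOWN)
		return;

	printf("stopping worker of type %d\n", (int) w->type);
}

int
main(void)
{
	Worker		exited = {WORKERTYPE_UNKNOWN};
	Worker		tsync = {WORKERTYPE_TABLESYNC};

	worker_stop(&exited);		/* no-op */
	worker_stop(&tsync);		/* stops the tablesync worker */
	return 0;
}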

======
src/backend/replication/logical/launcher.c

logicalrep_worker_find:

13.
+ Assert(wtype == WORKERTYPE_TABLESYNC ||
+    wtype == WORKERTYPE_SEQUENCESYNC ||
+    wtype == WORKERTYPE_APPLY);
+

The master version of this function checked for
isParallelApplyWorker(w), but now if a WORKERTYPE_PARALLEL_APPLY ever
got to here it would fail the above Assert. So, is the patch code OK,
or do we still need to also account for a possible
WORKERTYPE_PARALLEL_APPLY reaching here?

I have removed this assertion and kept the code as it is in master.

~~~

logicalrep_worker_stop:

14.
	worker = logicalrep_worker_find(subid, relid, wtype, false);

	if (worker)
	{
		Assert(!isParallelApplyWorker(worker));
		logicalrep_worker_stop_internal(worker, SIGTERM);
	}

~

This code is not changed much from before, so it first finds the
worker, but then asserts that it must not be a parallel apply worker.
But now, since the wtype is known and passed to the function, why not
move the Assert up front based on the wtype, before even doing the
'find'?

I prefer to keep it the way it currently is, as in_use also needs to
be checked apart from the worker type.

FetchRelationStates:

23.
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
*/
bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)

Partly because of the name (relations), I felt this might be better
as a void function, with the result passed back by reference
(bool *has_tables).

I did not want to add it as a function parameter, as this value is
not required by all the callers (for example, SyncProcessRelations).
Instead I have changed the variable to has_tables in that caller
function to avoid confusion.

======
src/backend/replication/logical/worker.c

SetupApplyOrSyncWorker:

24.
+
+ if (am_sequencesync_worker())
+ before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);

There should be a comment saying what this callback is for.

The function logicalrep_seqsyncworker_failuretime mentions that it
updates the failure time, and ProcessSyncingSequencesForApply mentions
the reason:
* To prevent starting the sequencesync worker at a high frequency after a
* failure, we store its last failure time. We start the sequencesync worker
* again after waiting at least wal_retrieve_retry_interval.

I felt this should be enough.
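
To make that throttling idea concrete, here is a tiny standalone
sketch (the constant and function names are made up for illustration;
the real code uses the failure time stored in the worker slot and the
wal_retrieve_retry_interval GUC):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for wal_retrieve_retry_interval, expressed in seconds. */
static const double retry_interval_secs = 5.0;

/*
 * Allow relaunching the sequencesync worker only once the retry
 * interval has elapsed since its last recorded failure.
 */
static bool
can_relaunch_seqsync_worker(time_t last_failure, time_t now)
{
	return difftime(now, last_failure) >= retry_interval_secs;
}

int
main(void)
{
	time_t		failed_at = time(NULL);

	/* Right after a failure the relaunch is suppressed ... */
	printf("relaunch immediately? %s\n",
		   can_relaunch_seqsync_worker(failed_at, failed_at) ? "yes" : "no");

	/* ... but is allowed once the interval has passed (assumes time_t counts seconds). */
	printf("relaunch after 6s?    %s\n",
		   can_relaunch_seqsync_worker(failed_at, failed_at + 6) ? "yes" : "no");

	return 0;
}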

The rest of the comments are fixed; the attached patches contain the
changes for them.

Regards,
Vignesh

Attachments:

v202412125-0001-Introduce-pg_sequence_state-function-for-.patchtext/x-patch; charset=US-ASCII; name=v202412125-0001-Introduce-pg_sequence_state-function-for-.patchDownload
From bbaa5c771f94b1669453899b7652001a3e622d66 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v202412125 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 47370e581a..9bb896e9a5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19690,6 +19690,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..aff2c1a6c0 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value stored in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 2dcc2d42da..8598599686 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0

v202412125-0005-Documentation-for-sequence-synchronizatio.patchtext/x-patch; charset=US-ASCII; name=v202412125-0005-Documentation-for-sequence-synchronizatio.patchDownload
From c897fc4d33b676b57bb0dad037bbb5b7596123a0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v202412125 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  13 +-
 doc/src/sgml/logical-replication.sgml     | 223 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 355 insertions(+), 33 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index bf3cee08a9..532c573987 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8119,16 +8119,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8162,7 +8165,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fbdd6ce574..bec3cfe2f9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5094,7 +5094,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker will be respawned.
+        replication apply worker and sequence synchronization worker will be
+        respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5241,8 +5242,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5265,10 +5266,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 8290cd1a08..d9a133927f 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -1570,6 +1570,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and the
+    subscriber and, if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1876,16 +2071,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2189,8 +2386,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2204,7 +2401,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d0d176cc54..3d841658cf 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..d40fc79e7b 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a586156614..de82964f6c 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

Attachment: v202412125-0004-Enhance-sequence-synchronization-during-s.patch (text/x-patch; charset=US-ASCII)
From 333584c0dd601de9c84f15fcd46aa3c40af72ba0 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 23 Dec 2024 12:22:05 +0530
Subject: [PATCH v202412125 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state using pg_sequence_state() (see the example query below).
    b) Logs a warning if the sequence parameters differ between the publisher and the subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing synchronized sequences in batches of 100.
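
As an illustration of step (a), the per-sequence fetch is roughly equivalent
to the following query run on the publisher. The sequence name 's1' is
hypothetical (the worker itself passes the remote OID it looked up earlier),
and pg_sequence_state() comes from an earlier patch in this series:

    SELECT last_value, log_cnt, is_called, page_lsn,
           seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
    FROM pg_sequence_state('s1'::regclass), pg_sequence
    WHERE seqrelid = 's1'::regclass;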

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences (see the usage sketch below).
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
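
A usage sketch follows. The object and connection names are hypothetical,
and it assumes a publication created with the FOR ALL SEQUENCES syntax
added by an earlier patch in this series:

    -- publisher
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- later, re-synchronize all published sequences on the subscriber
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    -- check the synchronization state ('i' = INIT, 'r' = READY)
    SELECT srrelid::regclass, srsubstate
    FROM pg_subscription_rel
    WHERE srsubid = (SELECT oid FROM pg_subscription WHERE subname = 'seq_sub');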
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 664 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 215 ++++++
 27 files changed, 1513 insertions(+), 171 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9fb98adb9d..eaea9a4577 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 394b7c5efe..225db0d03c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables: when all_states is true, get all tables; otherwise get
+ * only the tables that have not reached READY state.
+ * If getting sequences: when all_states is true, get all sequences;
+ * otherwise get only the sequences that have not reached READY state
+ * (i.e. are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index da9a8fe99f..d10c3d8b56 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index aff2c1a6c0..f7e001b9ae 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page_lsn of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 03e97730e7..bbb962bc83 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or the publication was last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or the publication was last refreshed.
+ *
+ * Note that this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2086,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * Check and log a warning if the publisher has subscribed to the same table
  * from some other publisher. This check is required only if "copy_data = true"
  * and "origin = none" for CREATE SUBSCRIPTION and
- * ALTER SUBSCRIPTION ... REFRESH statements to notify the user that data
- * having origin might have been copied.
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to notify the user
+ * that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2124,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 68deea50f6..f7ee2bbaf5 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -871,7 +871,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 40059e2930..9b56c68aa3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 07bc5517fc..5988ea39bd 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index e5fdca8bbf..e61de6430a 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -235,19 +235,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -257,7 +256,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -317,6 +316,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -402,7 +402,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -489,7 +490,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -497,6 +498,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -612,13 +621,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -685,7 +694,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -815,6 +824,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -863,7 +903,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1178,7 +1218,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1314,7 +1354,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1357,6 +1397,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..46f6fac3d1
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,664 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not fetch info for sequence \"%s.%s\" from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!(*sequence_mismatch))
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the LSN when the sequence state was set. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	char		slotname[NAMEDATALEN];
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly, while an excessively large batch
+ * would keep a transaction open for an extended period. This value is a
+ * compromise between those two concerns.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
+			 subid, GetSystemIdentifier());
+
+	/*
+	 * Here we use the slot name instead of the subscription name as the
+	 * application_name, so that it is different from the leader apply worker
+	 * and synchronous replication can distinguish them.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   slotname, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * COPY FROM does not honor RLS policies.  That is not a problem for
+		 * subscriptions owned by roles with BYPASSRLS privilege (or
+		 * superuser, who has it implicitly), but other roles should not be
+		 * able to circumvent RLS.  Disallow logical replication into RLS
+		 * enabled relations for such roles.
+		 */
+		if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
+			ereport(ERROR,
+					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
+						   GetUserNameFromId(GetUserId(), true),
+						   RelationGetRelationName(sequence_rel)));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			/* LOG all the sequences synchronized during the current batch. */
+			for (int i = 0; i < curr_batch_seq; i++)
+			{
+				SubscriptionRelState *done_seq;
+
+				done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																		 (curr_seq - curr_batch_seq) + i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for the next batch. */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * If sequence synchronization failed due to a parameter mismatch, record
+	 * the failure time to prevent the sequencesync worker from being
+	 * relaunched repeatedly.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index cd7e26fad2..17f5d7ee94 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -51,8 +51,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -67,15 +69,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -90,7 +101,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -110,7 +123,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -122,17 +147,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared static so that the value from the last rebuild can be
+	 * reused until the subscription relation state is next invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -145,16 +175,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -162,7 +195,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +224,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9917fb8b25..7aff9abd50 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 0fe6512b4e..b7b3f72243 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4629,8 +4655,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4709,6 +4735,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4728,14 +4758,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4780,6 +4813,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad2..7260ab1763 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3258,7 +3258,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 4936f5bd68..13ab2d5c69 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8598599686..5524226dca 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12130,6 +12130,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 7637f67518..733d2e15b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..739a68174c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d929..66dcd71eef 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index b2395e7b57..993034bb2f 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000000..0be44e2185
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Set up the structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Set up the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Set up logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
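+
+# Note: nextval() WAL-logs 32 values in advance (SEQ_LOG_VALS), so after 100
+# nextval() calls the publisher reports last_value = 100, log_cnt = 32 and
+# is_called = 't'; a checkpoint in between would change log_cnt, hence the
+# checkpoint_timeout setting above.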
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences on the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences on the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false does not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# when the sequence definitions on the publisher and the subscriber differ.
+##########
+
+# Create a new sequence 'regress_s5' whose START value differs between the
+# publisher and the subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0
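
For reference, the end-to-end flow exercised by the TAP test above can be
reproduced with plain SQL; a minimal sketch, assuming the FOR ALL SEQUENCES
publication syntax and the REFRESH PUBLICATION SEQUENCES subcommand added by
this patch set (object names and the connection string are illustrative):

    -- On the publisher
    CREATE SEQUENCE s1;
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- On the subscriber (the sequence must already exist with matching
    -- parameters, otherwise a mismatch warning is logged)
    CREATE SEQUENCE s1;
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- Later, re-synchronize all published sequences on demand
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    -- Verify the synchronized state on the subscriber
    SELECT last_value, is_called FROM s1;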

v202412125-0003-Reorganize-tablesync-Code-and-Introduce-s.patchtext/x-patch; charset=US-ASCII; name=v202412125-0003-Reorganize-tablesync-Code-and-Introduce-s.patchDownload
From d60f1caceed785e592361682ab84030bc0ead878 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v202412125 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 234 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 89bf5ec933..394b7c5efe 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..cd7e26fad2
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to the table synchronization workers and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum representing the overall validity of the subscription relation state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relation
+ * state is no longer valid and should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relation state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state
+ * is up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 7c8a0e9cfe..9917fb8b25 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 9e50c880f8..0fe6512b4e 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4778,7 +4778,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a2ec350326..0b9e9d0c08 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2806,7 +2806,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v202412125-0002-Introduce-ALL-SEQUENCES-support-for-Postg.patch (text/x-patch; charset=US-ASCII)
From 60b528dc9da2583a195d40c9a8ce7add0c5808f9 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Sun, 8 Dec 2024 13:37:31 +0000
Subject: [PATCH v202412125 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced to better display
publications that contain sequences, as well as the sequences included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
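
A minimal usage sketch of the new syntax, mirroring the examples added to
create_publication.sgml in this patch (the publication names are illustrative):

    -- Publish all sequences in the database, including ones created later;
    -- as with FOR ALL TABLES, this requires superuser.
    CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;

    -- Publish all tables and all sequences with a single publication.
    CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;

    -- Each object type may appear only once in the FOR ALL list; e.g.
    -- FOR ALL SEQUENCES, SEQUENCES is rejected with
    -- "SEQUENCES can be specified only once."

With such a publication in place, \dRp shows the new "All sequences" column and
\d on a sequence lists the publications that include it, per the psql changes
below.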
---
 doc/src/sgml/ref/create_publication.sgml  |  65 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  14 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 252 +++++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 776 insertions(+), 341 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 5e25536554..9a19db863c 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -118,16 +123,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +154,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +276,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +295,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -431,6 +447,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9bbb60463f..9fb98adb9d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 5050057a7e..a201587475 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -786,11 +786,11 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create ALL TABLES and/or ALL SEQUENCES publication")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -824,6 +824,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1957,12 +1959,14 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of ALL TABLES and/or ALL SEQUENCES publication must be a superuser.")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 67eb96396a..40059e2930 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19489,6 +19522,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 19969e400f..a4540eafd0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4285,6 +4285,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4315,9 +4316,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4333,6 +4334,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4353,6 +4355,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4404,8 +4408,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 9c5ddd20cf..da7a0c1030 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -633,6 +633,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index aa1564cd45..62a6edcbd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 2657abdc72..f8bbcffd85 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1737,28 +1737,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1767,22 +1758,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1792,6 +1776,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1823,32 +1860,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
 
-		free(footers[0]);
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2071,6 +2129,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6318,7 +6382,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6335,13 +6399,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6447,6 +6518,19 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
+	int			puboid_col = -1,	/* column indexes in "res" */
+				pubname_col = -1,
+				pubowner_col = -1,
+				puballtables_col = -1,
+				puballsequences_col = -1,
+				pubins_col = -1,
+				pubupd_col = -1,
+				pubdel_col = -1,
+				pubtrunc_col = -1,
+				pubgen_col = -1,
+				pubviaroot_col = -1;
+	int			cols = 0;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6464,22 +6548,52 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+	puboid_col = cols++;
+	pubname_col = cols++;
+	pubowner_col = cols++;
+	puballtables_col = cols++;
+
+	if (has_pubsequence)
+	{
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+		puballsequences_col = cols++;
+	}
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+	pubins_col = cols++;
+	pubupd_col = cols++;
+	pubdel_col = cols++;
+
 	if (has_pubtruncate)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
+		pubtrunc_col = cols++;
+	}
+
 	if (has_pubgencols)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubgencols");
+		pubgen_col = cols++;
+	}
+
 	if (has_pubviaroot)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+		pubviaroot_col = cols++;
+	}
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6523,9 +6637,9 @@ describePublications(const char *pattern)
 		const char	align = 'l';
 		int			ncols = 5;
 		int			nrows = 1;
-		char	   *pubid = PQgetvalue(res, i, 0);
-		char	   *pubname = PQgetvalue(res, i, 1);
-		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+		char	   *pubid = PQgetvalue(res, i, puboid_col);
+		char	   *pubname = PQgetvalue(res, i, pubname_col);
+		bool		puballtables = strcmp(PQgetvalue(res, i, puballtables_col), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
 		if (has_pubtruncate)
@@ -6534,6 +6648,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6541,6 +6657,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6551,17 +6669,19 @@ describePublications(const char *pattern)
 		if (has_pubviaroot)
 			printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
 
-		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubowner_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, puballtables_col), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, puballsequences_col), false, false);	/* all sequences */
+		printTableAddCell(&cont, PQgetvalue(res, i, pubins_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubupd_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubdel_col), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubtrunc_col), false, false);
 		if (has_pubgencols)
-			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubgen_col), false, false);
 		if (has_pubviaroot)
-			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubviaroot_col), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 31c77214b4..4936f5bd68 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3491,12 +3491,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index e2d894a2ff..12613d22e2 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -112,6 +118,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -147,6 +154,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9462493e..7637f67518 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index c48f11f293..96a51bf687 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1801,18 +1873,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1823,50 +1895,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c4c21a95d0..74d12ca2d1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fbdb932e6b..a2ec350326 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2262,6 +2262,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#183vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#179)
Re: Logical Replication of sequences

On Fri, 20 Dec 2024 at 08:05, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

Here are some review comments for the patch v20241211-0005.

======

Section "29.6.3. Examples"

2.
Should the Examples section also have an example of ALTER SUBSCRIPTION
... REFRESH PUBLICATION to demonstrate (like in the TAP tests) that if
the sequences are already known, then those are not synchronised?

I felt it is not required; let's not add too many examples.

Section "29.8. Restrictions"

3.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either
by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.

I don't know if you need to mention it, or maybe it is too obvious,
but the suggestion here to use "ALTER SUBSCRIPTION ... REFRESH
PUBLICATION SEQUENCES" assumed you've already arranged for the
PUBLICATION to be publishing sequences before this.

I felt it is obvious; it need not be mentioned.

======
doc/src/sgml/ref/alter_subscription.sgml

4.
<para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
synchronize
+          sequences in the publications that are being subscribed to
when the replication
+          starts. The default is <literal>true</literal>.
</para>

This is talking also about "synchronize sequences" when the
replication starts, but it is a bit confusing. IIUC, the .. REFRESH
PUBLICATION only synchronizes *newly added* sequences anyway, so does
it mean even that will not happen if copy_data=false?

I think this option needs more clarification on how it interacts with
sequences. Also, I don't recall seeing any test for sequences and
copy_data in the patch 0004 TAP tests, so maybe something needs to be
added there too.

This case is similar to how tables are handled; that is, the relation is
added to pg_subscription_rel and marked as ready without copying the data.
Since tables and sequences behave the same way, I have kept the
documentation the same. I have added a test for this in the 0004 TAP
test.

The rest of the comments are fixed and the changes for the same are
available in the v202412125 version patch at [1]/messages/by-id/CALDaNm0PbSAQvs34D+J63SgmRUzDQHZ1W4aeW_An9pR_tXJnRA@mail.gmail.com.
[1]: /messages/by-id/CALDaNm0PbSAQvs34D+J63SgmRUzDQHZ1W4aeW_An9pR_tXJnRA@mail.gmail.com

Regards,
Vignesh

#184Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#182)
RE: Logical Replication of sequences

Dear Vignesh,

Thanks for updating the patch! Here are my comments.

01. SyncingRelationsState

```
* SYNC_RELATIONS_STATE_NEEDS_REBUILD The subscription relations state is no
* longer valid and the subscription relations should be rebuilt.
```

Can we follow the same style as the other lines? For example:

SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
state is no longer valid and the subscription relations should be rebuilt.

02. FetchRelationStates()

```
/* Fetch tables and sequences that are in non-ready state. */
rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
false);
```

I think rstates should be list_free_deep()'d after the foreach().

03. LogicalRepSyncSequences

```
char slotname[NAMEDATALEN];
...
snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
subid, GetSystemIdentifier());

/*
* Here we use the slot name instead of the subscription name as the
* application_name, so that it is different from the leader apply worker,
* so that synchronous replication can distinguish them.
*/
LogRepWorkerWalRcvConn =
walrcv_connect(MySubscription->conninfo, true, true,
must_use_password,
slotname, &err);
```

Hmm, IIUC the sequence sync worker does not require any replication slots.
I feel the variable name should be "application_name" and the comment can be updated.
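
For example, a possible shape (an untested sketch based on the quoted code;
only the variable name and the comment change):

```
char		application_name[NAMEDATALEN];

snprintf(application_name, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
		 subid, GetSystemIdentifier());

/*
 * Use a unique application_name (not the subscription name), so that the
 * connection can be distinguished from the leader apply worker, e.g. by
 * synchronous replication.
 */
LogRepWorkerWalRcvConn =
	walrcv_connect(MySubscription->conninfo, true, true,
				   must_use_password,
				   application_name, &err);
```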

04. LogicalRepSyncSequences

```
/* Get the sequences that should be synchronized. */
sequences = GetSubscriptionRelations(subid, false, true, false);
```

I think rstates should be list_free_deep()'d after the foreach_ptr().

05. LogicalRepSyncSequences

```
/*
* COPY FROM does not honor RLS policies. That is not a problem for
* subscriptions owned by roles with BYPASSRLS privilege (or
* superuser, who has it implicitly), but other roles should not be
* able to circumvent RLS. Disallow logical replication into RLS
* enabled relations for such roles.
*/
if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
ereport(ERROR,
errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
GetUserNameFromId(GetUserId(), true),
RelationGetRelationName(sequence_rel)));
```

Can we really set a row-level security policy on sequences? I tested it,
but I couldn't.

```
postgres=# CREATE SEQUENCE s;
CREATE SEQUENCE
postgres=# ALTER TABLE s ENABLE ROW LEVEL SECURITY ;
ERROR: ALTER action ENABLE ROW SECURITY cannot be performed on relation "s"
DETAIL: This operation is not supported for sequences.
```

06. copy_sequence

```
appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
"FROM pg_catalog.pg_class c\n"
"INNER JOIN pg_catalog.pg_namespace n\n"
" ON (c.relnamespace = n.oid)\n"
"WHERE n.nspname = %s AND c.relname = %s",
quote_literal_cstr(nspname),
quote_literal_cstr(relname));
```

I feel the function is not very efficient because it obtains only one tuple, i.e.,
information for a single sequence, per query. I think you basically copied this from
fetch_remote_table_info(), which was fine there because a tablesync worker handles
only one table.

Can you fetch all the sequences at once and then check whether each sequence matches?
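
For instance, something along these lines could fetch the metadata for every
sequence in one round trip (an untested sketch only; the surrounding list
handling and error checking are omitted, and a single walrcv_exec() would then
return one row per sequence to match against the local definitions):

```
StringInfoData cmd;
bool		first = true;

initStringInfo(&cmd);
appendStringInfoString(&cmd,
					   "SELECT n.nspname, c.relname, c.oid, c.relkind\n"
					   "FROM pg_catalog.pg_class c\n"
					   "INNER JOIN pg_catalog.pg_namespace n\n"
					   "  ON (c.relnamespace = n.oid)\n"
					   "WHERE (n.nspname, c.relname) IN (");

/* Build one IN list covering all sequences to be synchronized. */
foreach_ptr(SubscriptionRelState, seqinfo, sequences)
{
	char	   *nspname = get_namespace_name(get_rel_namespace(seqinfo->relid));
	char	   *relname = get_rel_name(seqinfo->relid);

	appendStringInfo(&cmd, "%s(%s, %s)",
					 first ? "" : ", ",
					 quote_literal_cstr(nspname),
					 quote_literal_cstr(relname));
	first = false;
}
appendStringInfoChar(&cmd, ')');
```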

07. LogicalRepSyncSequences

```
/* LOG all the sequences synchronized during current batch. */
for (int i = 0; i < curr_batch_seq; i++)
{
SubscriptionRelState *done_seq;

done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
(curr_seq - curr_batch_seq) + i));

ereport(DEBUG1,
errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
}
```

The loop is needed only when the debug messages will actually be emitted.
Can we use message_level_is_interesting() to skip the loop otherwise?
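
For example (a sketch only, applying that to the quoted code):

```
/* LOG all the sequences synchronized during the current batch. */
if (message_level_is_interesting(DEBUG1))
{
	for (int i = 0; i < curr_batch_seq; i++)
	{
		SubscriptionRelState *done_seq;

		done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
																  (curr_seq - curr_batch_seq) + i));

		ereport(DEBUG1,
				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
								get_subscription_name(subid, false),
								get_rel_name(done_seq->relid)));
	}
}
```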

08. pg_dump.c

```
/*
* getSubscriptionTables
* Get information about subscription membership for dumpable tables. This
* will be used only in binary-upgrade mode for PG17 or later versions.
*/
void
getSubscriptionTables(Archive *fout)
```

I was a bit confused by the pg_dump code. I suspected that pg_upgrade might not
be able to transfer pg_subscription_rel entries for sequences, but it seems to work
because sequences are handled mostly the same way as normal tables at the pg_dump layer.
Still, to be consistent with the rest of the code, the function should be named
"getSubscriptionRelations" and the comments should be updated as well.

09. logical-replication.sgml

I feel we can add descriptions in "Publication" section. E.g.:

Publications can also include sequences, but their behavior differs from that of
a table or a group of tables: users can synchronize the current state of sequences
at any time. For more details, please see "Replicating Sequences".

10. pg_proc.dat

```
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
```

I think this is not wrong, but according to src/include/catalog/unused_oids,
the OID should be in the 8000-9999 range while the patch is under development:

```
$ ./unused_oids
...
Patches should use a more-or-less consecutive range of OIDs.
Best practice is to start with a random choice in the range 8000-9999.
Suggested random unused OID: 9293 (415 consecutive OID(s) available starting here)
```

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#185vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#184)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 27 Dec 2024 at 15:18, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Thanks for updating the patch! Here are my comments.

01. SyncingRelationsState

```
* SYNC_RELATIONS_STATE_NEEDS_REBUILD The subscription relations state is no
* longer valid and the subscription relations should be rebuilt.
```

Can we follow the same style as the other lines? For example:

SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
state is no longer valid and the subscription relations should be rebuilt.

Modified

02. FetchRelationStates()

```
/* Fetch tables and sequences that are in non-ready state. */
rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
false);
```

I think rstates should be list_free_deep()'d after the foreach().

Since rstates is allocated in TopTransactionContext, and the subsequent
CommitTransactionCommand will take care of releasing that memory via
AtCommit_Memory:
/*
* Release all transaction-local memory. TopTransactionContext survives
* but becomes empty; any sub-contexts go away.
*/
Assert(TopTransactionContext != NULL);
MemoryContextReset(TopTransactionContext);

So I felt a deep free is not required in this case.
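
To spell out the pattern being relied on here (a sketch only; the declarations
and the CacheMemoryContext copies follow the quoted FetchRelationStates() code):

```
List	   *rstates;
ListCell   *lc;

StartTransactionCommand();

/* The list and its members are palloc'd in TopTransactionContext here. */
rstates = GetSubscriptionRelations(MySubscription->oid, true, true, false);

foreach(lc, rstates)
{
	SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);

	/*
	 * Anything that must outlive the transaction is copied into
	 * CacheMemoryContext, as the existing code already does.
	 */
	(void) rstate;
}

/*
 * No explicit list_free_deep() is needed: CommitTransactionCommand()
 * resets TopTransactionContext (see AtCommit_Memory), which releases
 * rstates and all of its elements.
 */
CommitTransactionCommand();
```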

03. LogicalRepSyncSequences

```
char slotname[NAMEDATALEN];
...
snprintf(slotname, NAMEDATALEN, "pg_%u_sync_sequences_" UINT64_FORMAT,
subid, GetSystemIdentifier());

/*
* Here we use the slot name instead of the subscription name as the
* application_name, so that it is different from the leader apply worker,
* so that synchronous replication can distinguish them.
*/
LogRepWorkerWalRcvConn =
walrcv_connect(MySubscription->conninfo, true, true,
must_use_password,
slotname, &err);
```

Hmm, IIUC the sequence sync worker does not require any replication slots.
I feel the variable name should be "application_name" and the comment can be updated.

Modified

04. LogicalRepSyncSequences

```
/* Get the sequences that should be synchronized. */
sequences = GetSubscriptionRelations(subid, false, true, false);
```

I think rstates should be list_free_deep()'d after the foreach_ptr().

Since rstates is allocated in TopTransactionContext, and there is a
CommitTransactionCommand immediately afterwards which will take care of
releasing that memory via AtCommit_Memory:
/*
* Release all transaction-local memory. TopTransactionContext survives
* but becomes empty; any sub-contexts go away.
*/
Assert(TopTransactionContext != NULL);
MemoryContextReset(TopTransactionContext);

So I felt a deep free is not required in this case.

05. LogicalRepSyncSequences

```
/*
* COPY FROM does not honor RLS policies. That is not a problem for
* subscriptions owned by roles with BYPASSRLS privilege (or
* superuser, who has it implicitly), but other roles should not be
* able to circumvent RLS. Disallow logical replication into RLS
* enabled relations for such roles.
*/
if (check_enable_rls(RelationGetRelid(sequence_rel), InvalidOid, false) == RLS_ENABLED)
ereport(ERROR,
errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("user \"%s\" cannot replicate into sequence with row-level security enabled: \"%s\"",
GetUserNameFromId(GetUserId(), true),
RelationGetRelationName(sequence_rel)));
```

Can we really set a row-level security policy on sequences? I tested it,
but I couldn't.

```
postgres=# CREATE SEQUENCE s;
CREATE SEQUENCE
postgres=# ALTER TABLE s ENABLE ROW LEVEL SECURITY ;
ERROR: ALTER action ENABLE ROW SECURITY cannot be performed on relation "s"
DETAIL: This operation is not supported for sequences.
```

Removed this check

06. copy_sequence

```
appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
"FROM pg_catalog.pg_class c\n"
"INNER JOIN pg_catalog.pg_namespace n\n"
" ON (c.relnamespace = n.oid)\n"
"WHERE n.nspname = %s AND c.relname = %s",
quote_literal_cstr(nspname),
quote_literal_cstr(relname));
```

I feel the function is not very efficient because it obtains only one tuple, i.e.,
information for a single sequence, per query. I think you basically copied this from
fetch_remote_table_info(), which was fine there because a tablesync worker handles
only one table.

Can you fetch all the sequences at once and then check whether each sequence matches?

I still could not come up with a simple way to do this; I will think about
it more and handle it in the next version.

07. LogicalRepSyncSequences

```
/* LOG all the sequences synchronized during current batch. */
for (int i = 0; i < curr_batch_seq; i++)
{
SubscriptionRelState *done_seq;

done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
(curr_seq - curr_batch_seq) + i));

ereport(DEBUG1,
errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
get_subscription_name(subid, false), get_rel_name(done_seq->relid)));
}
```

The loop is needed only when the debug messages will actually be emitted.
Can we use message_level_is_interesting() to skip the loop otherwise?

Modified

08. pg_dump.c

```
/*
* getSubscriptionTables
* Get information about subscription membership for dumpable tables. This
* will be used only in binary-upgrade mode for PG17 or later versions.
*/
void
getSubscriptionTables(Archive *fout)
```

I was a bit confused by the pg_dump code. I suspected that pg_upgrade might not
be able to transfer pg_subscription_rel entries for sequences, but it seems to work
because sequences are handled mostly the same way as normal tables at the pg_dump layer.
Still, to be consistent with the rest of the code, the function should be named
"getSubscriptionRelations" and the comments should be updated as well.

Modified

09. logical-replication.sgml

I feel we can add descriptions in "Publication" section. E.g.:

Publications can also include sequences, but their behavior differs from that of
a table or a group of tables: users can synchronize the current state of sequences
at any time. For more details, please see "Replicating Sequences".

Added this

10. pg_proc.dat

```
+{ oid => '6313',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
```

I think this is not wrong, but according to src/include/catalog/unused_oids,
the OID should be in the 8000-9999 range while the patch is under development:

```
$ ./unused_oids
...
Patches should use a more-or-less consecutive range of OIDs.
Best practice is to start with a random choice in the range 8000-9999.
Suggested random unused OID: 9293 (415 consecutive OID(s) available starting here)
```

Modified

Thanks for the comments; the attached patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20241230-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From e4aaca497df49b65f162e7c596911f20939e5166 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20241230 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 47370e581a..9bb896e9a5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19690,6 +19690,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0188e8bbd5..aff2c1a6c0 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 2dcc2d42da..e150d0ba71 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0

v20241230-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From eea30f200b5bff3fbbdc91bb82ca3673c7bf4d49 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20241230 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 234 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 89bf5ec933..394b7c5efe 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index e7f7d4c5e4..50f1639736 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 3d36249d8a..2381c5f5d9 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..0311325fb4
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum representing the overall state of subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 7c8a0e9cfe..9917fb8b25 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 9e50c880f8..0fe6512b4e 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4778,7 +4778,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8244ad537a..f11eea6824 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 9646261d7e..6504b70e4c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index cb8e04660b..a03670f264 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2807,7 +2807,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20241230-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 8a63e31bfbb19167cbaea2d3d5097e22af07b98a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20241230 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  13 +-
 doc/src/sgml/logical-replication.sgml     | 229 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 360 insertions(+), 34 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cc6cf9bef0..e8e1e57255 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8108,16 +8108,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8151,7 +8154,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fbdd6ce574..bec3cfe2f9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5094,7 +5094,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker will be respawned.
+        replication apply worker and sequence synchronization worker will be
+        respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5241,8 +5242,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5265,10 +5266,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 8290cd1a08..dfa2e9dc9f 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -111,7 +111,11 @@
    accessed.  Each table can be added to multiple publications if needed.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can also include sequences,
+   but their behavior differs from that of tables: sequence changes are not
+   replicated incrementally; instead, the current state of published
+   sequences can be synchronized on demand. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1570,6 +1574,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber can become stale because the
+    corresponding sequences keep advancing on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and the
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side again.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1876,16 +2075,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2189,8 +2390,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2204,7 +2405,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d0d176cc54..3d841658cf 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..d40fc79e7b 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a586156614..de82964f6c 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
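
As a quick illustration of the workflow described in the "Refreshing Stale
Sequences" documentation above, here is a minimal sketch of how a user could
detect a stale sequence and re-synchronize it. It is not part of the patch and
assumes the s1, pub1 and sub1 objects from the documentation examples:

-- On the publisher: check the current state of the published sequence.
test_pub=# SELECT last_value, is_called FROM s1;

-- On the subscriber: compare with the last synchronized state.
test_sub=# SELECT last_value, is_called FROM s1;

-- If the values differ, re-synchronize all subscribed sequences.
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

Note that REFRESH PUBLICATION SEQUENCES re-synchronizes all subscribed
sequences, whereas a plain REFRESH PUBLICATION only initializes newly added
ones.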

Attachment: v20241230-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch, charset=US-ASCII)
From 9e0012d025f13c165ccb8deda8fd469fdf57fd36 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 23 Dec 2024 12:22:05 +0530
Subject: [PATCH v20241230 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   6 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 215 ++++++
 30 files changed, 1509 insertions(+), 177 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9fb98adb9d..eaea9a4577 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 394b7c5efe..225db0d03c 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index da9a8fe99f..d10c3d8b56 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index aff2c1a6c0..f7e001b9ae 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page LSN of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page LSN of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 03e97730e7..bbb962bc83 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or the publication was last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the subscription was
+ * created or the publication was last refreshed.
+ *
+ * Note: this is a common function for handling the different REFRESH commands,
+ * distinguished by the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2086,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * Check and log a warning if the publisher has subscribed to the same table
  * from some other publisher. This check is required only if "copy_data = true"
  * and "origin = none" for CREATE SUBSCRIPTION and
- * ALTER SUBSCRIPTION ... REFRESH statements to notify the user that data
- * having origin might have been copied.
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to notify the user
+ * that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2124,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 68deea50f6..f7ee2bbaf5 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -871,7 +871,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 96d01ad37f..5699efe4f5 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 7afe56885c..2ac54105af 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 8b19642044..6f84fb7f7c 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -480,7 +481,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -488,6 +489,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync failure time in the apply worker, so that the
+ * apply worker delays relaunching the sequencesync worker after a failure.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime()
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1348,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 2381c5f5d9..a1fc571ece 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..b23c65852a
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
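+ *
+ * For example (illustrative names only; an actual setup would use an existing
+ * publication that includes the desired sequences, e.g. one created with
+ * FOR ALL SEQUENCES):
+ *
+ *     CREATE SUBSCRIPTION mysub CONNECTION '...' PUBLICATION seqpub;
+ *     ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;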
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * transaction by fetching each sequence's value and page LSN from the remote
+ * publisher and applying them to the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can read the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. The launcher process, by
+ *    contrast, has no direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/* Is a sequencesync worker already running? */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
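+			/*
+			 * Throttle relaunching after a failure: only start a new
+			 * sequencesync worker if the previous one did not fail within
+			 * wal_retrieve_retry_interval.
+			 */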
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
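+	/*
+	 * Fetch the remote sequence's current state (last_value, log_cnt,
+	 * is_called, page_lsn) together with its parameters in a single query, so
+	 * that the parameters can be compared against the local sequence below.
+	 */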
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not fetch relation info for sequence \"%s.%s\" from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!(*sequence_mismatch))
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences to have the same parameters as their remote counterparts."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * Synchronizing each sequence in its own transaction would incur the overhead
+ * of repeatedly starting and committing transactions. Conversely, an
+ * excessively large batch would keep a transaction (and the sequence locks)
+ * open for too long, so cap the number of sequences per transaction.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * If copying the sequence fails, report a warning for any sequences
+		 * whose parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
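+		/*
+		 * Mark the sequence as READY only when its parameters matched and its
+		 * value was copied; a mismatched sequence stays in INIT state and is
+		 * reported below.
+		 */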
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch, or synchronized the
+		 * last remaining sequence?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* Log all the sequences synchronized during the current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+
+			/* Prepare for the next batch. */
+			start_txn = true;
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * If sequence synchronization failed due to a parameter mismatch, set the
+	 * failure time to prevent the sequencesync worker from being relaunched
+	 * too frequently.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence sync with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 0311325fb4..51a58bddca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -51,8 +51,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -67,15 +69,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -90,7 +101,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a
+ * sequencesync worker for newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -110,7 +123,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -122,17 +147,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if the subscription has one or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is static so that the value is remembered across calls; it only
+	 * needs to be refreshed when the catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -145,16 +175,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -162,7 +195,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +224,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9917fb8b25..7aff9abd50 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, false);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 0fe6512b4e..b7b3f72243 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4629,8 +4655,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4709,6 +4735,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4728,14 +4758,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4780,6 +4813,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
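+	/*
+	 * If the sequencesync worker exits due to an error, record the failure
+	 * time in the apply worker so that it waits at least
+	 * wal_retrieve_retry_interval before launching a new sequencesync worker.
+	 */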
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad2..7260ab1763 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3258,7 +3258,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 33d323085f..80c3547138 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a4540eafd0..bfb1f927a7 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5044,12 +5044,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index da7a0c1030..72aeef0009 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -790,6 +790,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 4936f5bd68..13ab2d5c69 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index e150d0ba71..7c766ee224 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12130,6 +12130,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f11eea6824..0324ae5cea 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index e88cbee3b5..adafa440e3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 7637f67518..733d2e15b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index a18d79d1b2..47a3326ad3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 6504b70e4c..739a68174c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d929..66dcd71eef 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index b2395e7b57..993034bb2f 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000000..0be44e2185
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false does not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should emit a warning
+# when a sequence definition does not match between publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning about the differing parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0
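
For anyone wanting to step through the above TAP test by hand, the checks it
performs reduce to a handful of SQL statements. The statements below are taken
from the test; only the connection string is illustrative:

-- Publisher: publish every sequence in the database.
CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

-- Subscriber: create the subscription.
CREATE SUBSCRIPTION regress_seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION regress_seq_pub;

-- Subscriber: wait until all relations, including sequences, reach the
-- 'ready' state.
SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');

-- Subscriber: inspect the synchronized state of a sequence.
SELECT last_value, log_cnt, is_called FROM regress_s1;

-- Subscriber: re-synchronize all published sequences after the publisher has
-- advanced them.
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;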

Attachment: v20241230-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 8263f28a40e01fa4f100b02f2f02b50b72f3ac52 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Sun, 8 Dec 2024 13:37:31 +0000
Subject: [PATCH v20241230 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced to better
display the publications that contain a specific sequence and the sequences
included in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
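
For quick reference, the syntax this patch adds looks as follows (the
statements are taken from the documentation and regression tests included
below):

-- Publish all sequences, or all tables and all sequences together.
CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;

-- The choice is recorded in the new pg_publication.puballsequences column.
SELECT pubname, puballtables, puballsequences FROM pg_publication;

On the psql side, \dRp gains an "All sequences" column, and \d on a sequence
lists the publications that include it.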
---
 doc/src/sgml/ref/create_publication.sgml  |  65 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  14 +-
 src/backend/parser/gram.y                 |  82 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 252 +++++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 776 insertions(+), 341 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 5e25536554..9a19db863c 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,10 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
+    [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
       | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
+
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
@@ -118,16 +123,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +154,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +276,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +295,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -431,6 +447,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9bbb60463f..9fb98adb9d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 5050057a7e..a201587475 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -786,11 +786,11 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create ALL TABLES and/or ALL SEQUENCES publication")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -824,6 +824,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1957,12 +1959,14 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of an ALL TABLES and/or ALL SEQUENCES publication must be a superuser.")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index bd5ebb35c4..96d01ad37f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19490,6 +19523,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.  Also check
+ * that each publication object type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 19969e400f..a4540eafd0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4285,6 +4285,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4315,9 +4316,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4333,6 +4334,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4353,6 +4355,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4404,8 +4408,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 9c5ddd20cf..da7a0c1030 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -633,6 +633,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index aa1564cd45..62a6edcbd7 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 2657abdc72..f8bbcffd85 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1737,28 +1737,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1767,22 +1758,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1792,6 +1776,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1823,32 +1860,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
 
-		free(footers[0]);
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2071,6 +2129,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6318,7 +6382,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6335,13 +6399,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6447,6 +6518,19 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
+	int			puboid_col = -1,	/* column indexes in "res" */
+				pubname_col = -1,
+				pubowner_col = -1,
+				puballtables_col = -1,
+				puballsequences_col = -1,
+				pubins_col = -1,
+				pubupd_col = -1,
+				pubdel_col = -1,
+				pubtrunc_col = -1,
+				pubgen_col = -1,
+				pubviaroot_col = -1;
+	int			cols = 0;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6464,22 +6548,52 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+	puboid_col = cols++;
+	pubname_col = cols++;
+	pubowner_col = cols++;
+	puballtables_col = cols++;
+
+	if (has_pubsequence)
+	{
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+		puballsequences_col = cols++;
+	}
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+	pubins_col = cols++;
+	pubupd_col = cols++;
+	pubdel_col = cols++;
+
 	if (has_pubtruncate)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
+		pubtrunc_col = cols++;
+	}
+
 	if (has_pubgencols)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubgencols");
+		pubgen_col = cols++;
+	}
+
 	if (has_pubviaroot)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+		pubviaroot_col = cols++;
+	}
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6523,9 +6637,9 @@ describePublications(const char *pattern)
 		const char	align = 'l';
 		int			ncols = 5;
 		int			nrows = 1;
-		char	   *pubid = PQgetvalue(res, i, 0);
-		char	   *pubname = PQgetvalue(res, i, 1);
-		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+		char	   *pubid = PQgetvalue(res, i, puboid_col);
+		char	   *pubname = PQgetvalue(res, i, pubname_col);
+		bool		puballtables = strcmp(PQgetvalue(res, i, puballtables_col), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
 		if (has_pubtruncate)
@@ -6534,6 +6648,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubsequence)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6541,6 +6657,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6551,17 +6669,19 @@ describePublications(const char *pattern)
 		if (has_pubviaroot)
 			printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
 
-		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubowner_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, puballtables_col), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, puballsequences_col), false, false);	/* all sequences */
+		printTableAddCell(&cont, PQgetvalue(res, i, pubins_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubupd_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubdel_col), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubtrunc_col), false, false);
 		if (has_pubgencols)
-			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubgen_col), false, false);
 		if (has_pubviaroot)
-			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubviaroot_col), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 31c77214b4..4936f5bd68 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3491,12 +3491,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index e2d894a2ff..12613d22e2 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -112,6 +118,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -147,6 +154,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0f9462493e..7637f67518 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index c48f11f293..96a51bf687 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  TABLES can be specified only once.
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1801,18 +1873,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1823,50 +1895,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c4c21a95d0..74d12ca2d1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- FOR ALL specifying both TABLES and SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - FOR ALL specifying TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - FOR ALL specifying SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e1c4f913f8..cb8e04660b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2263,6 +2263,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#186Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#185)
Re: Logical Replication of sequences

Hi Vignesh.

Here are some review comments for patch v20241230-0002

======
1. SYNTAX

The proposed syntax is currently:

CREATE PUBLICATION name
[ FOR ALL object_type [, ...]
| FOR publication_object [, ... ] ]
[ WITH ( publication_parameter [= value] [, ... ] ) ]

where object_type is one of:

TABLES
SEQUENCES

where publication_object is one of:

TABLE [ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [
WHERE ( expression ) ] [, ... ]
TABLES IN SCHEMA { schema_name | CURRENT_SCHEMA } [, ... ]
~

But lately, I've been thinking it could be clearer if you removed the
object_type and instead fully spelled out FOR ALL TABLES and/or FOR
ALL SEQUENCES.

compare
CREATE PUBLICATION FOR ALL TABLES, SEQUENCES;
versus
CREATE PUBLICATION FOR ALL TABLES, ALL SEQUENCES;

~

Also AFAICT, the current syntax says it is impossible to mix FOR ALL
SEQUENCES with FOR TABLE etc but really that *should* be allowed,
right?

And it looks like you may come to similar grief in future if you try
things like:
"FOR ALL TABLES" mixed with "FOR SEQUENCE seq_name"
"FOR ALL TABLES" mixed with "FOR SEQUENCES IN SCHEMA schema_name"

~

So, maybe a revised syntax like below would end up being easier and
also more flexible:

CREATE PUBLICATION name
[ FOR publication_object [, ... ] ]
[ WITH ( publication_parameter [= value] [, ... ] ) ]

where publication_object is one of:

ALL TABLES
ALL SEQUENCES
TABLE [ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [
WHERE ( expression ) ] [, ... ]
TABLES IN SCHEMA { schema_name | CURRENT_SCHEMA } [, ... ]
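
For example, under this revised grammar it should become possible to mix
the object kinds freely in a single statement (illustration only; the
publication, table, and schema names below are invented):

CREATE PUBLICATION pub_mixed
    FOR ALL SEQUENCES,
        TABLE t1 (a, b) WHERE (a > 0),
        TABLES IN SCHEMA sch1;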

======
src/backend/commands/publicationcmds.c

CreatePublication:

2.
- /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
  ereport(ERROR,
  (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("must be superuser to create FOR ALL TABLES publication")));
+ errmsg("must be superuser to create ALL TABLES and/or ALL SEQUENCES
publication")));

2a.
Typo.

/create ALL TABLES and/or ALL SEQUENCES publication/create a FOR ALL
TABLES and/or a FOR ALL SEQUENCES publication/

~

2b.
This message might be OK now, but I suspect it will become very messy
in future after you introduce another syntax like "FOR SEQUENCE
seq_name" etc (which would also be able to be used in combination with
a FOR ALL TABLES).

So, I think that for future-proofing against all the possible (future)
combinations, and for keeping the code cleaner, it will be far simpler
to just keep the errors for tables and sequences separated:

SUGGESTION:
if (!superuser())
{
if (stmt->for_all_tables)
ereport(ERROR, ... FOR ALL TABLES ...);
if (stmt->for_all_sequences)
ereport(ERROR, ... FOR ALL SEQUENCES ...);
}
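
For example (illustration only; publication names invented), a
non-superuser would then get a message specific to whichever form was
used:

CREATE PUBLICATION p_tabs FOR ALL TABLES;     -- error names FOR ALL TABLES
CREATE PUBLICATION p_seqs FOR ALL SEQUENCES;  -- error names FOR ALL SEQUENCES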

~~~

AlterPublicationOwner_internal:

3.
- if (form->puballtables && !superuser_arg(newOwnerId))
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((form->puballtables || form->puballsequences) &&
+ !superuser_arg(newOwnerId))
  ereport(ERROR,
  (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
  errmsg("permission denied to change owner of publication \"%s\"",
  NameStr(form->pubname)),
- errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ errhint("The owner of ALL TABLES and/or ALL SEQUENCES publication
must be a superuser.")));

Ditto the above comment #2.

======
src/bin/psql/describe.c

4.
+ puboid_col = cols++;
+ pubname_col = cols++;
+ pubowner_col = cols++;
+ puballtables_col = cols++;
+
+ if (has_pubsequence)
+ {
+ appendPQExpBufferStr(&buf,
+ ", puballsequences");
+ puballsequences_col = cols++;
+ }
+
+ appendPQExpBufferStr(&buf,
+ ", pubinsert, pubupdate, pubdelete");
+ pubins_col = cols++;
+ pubupd_col = cols++;
+ pubdel_col = cols++;
+
  if (has_pubtruncate)
+ {
  appendPQExpBufferStr(&buf,
  ", pubtruncate");
+ pubtrunc_col = cols++;
+ }
+
  if (has_pubgencols)
+ {
  appendPQExpBufferStr(&buf,
  ", pubgencols");
+ pubgen_col = cols++;
+ }
+
  if (has_pubviaroot)
+ {
  appendPQExpBufferStr(&buf,
  ", pubviaroot");
+ pubviaroot_col = cols++;
+ }

There is some overlap/duplication of the new variable 'cols' and the
existing variable 'ncols'.

AFAICT you can just move/replace the declaration of 'ncols' to where
'cols' is declared, and then you can remove the duplicated code below
(because the above code is already doing the same thing).

if (has_pubtruncate)
ncols++;
if (has_pubgencols)
ncols++;
if (has_pubviaroot)
ncols++;
if (has_pubsequence)
ncols++;

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#187Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#185)
Re: Logical Replication of sequences

Hi Vignesh,

Some minor review comments for the patch v20241230-0003.

======
src/backend/replication/logical/syncutils.c

1.
+ * syncutils.c
+ *   PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group

Happy New Year.

s/2024/2025/

~~~

2.
+/*
+ * Enum representing the overall state of subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that subscription relation state is
+ * up-to-date and valid.
+ */

2a.
That first sentence saying "overall state of [...] state" is a bit strange.

Maybe it can be reworded something like:
Enum for phases of the subscription relations state.

~

2b.
/is no longer valid and/is no longer valid, and/

~

2c.
/that subscription relation state is up-to-date/that the subscription
relation state is up-to-date/

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#188vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#186)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 3 Jan 2025 at 06:53, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

Here are some review comments for patch v20241230-0002

======
1. SYNTAX

The proposed syntax is currently:

CREATE PUBLICATION name
[ FOR ALL object_type [, ...]
| FOR publication_object [, ... ] ]
[ WITH ( publication_parameter [= value] [, ... ] ) ]

where object_type is one of:

TABLES
SEQUENCES

where publication_object is one of:

TABLE [ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [
WHERE ( expression ) ] [, ... ]
TABLES IN SCHEMA { schema_name | CURRENT_SCHEMA } [, ... ]
~

But lately, I've been thinking it could be clearer if you removed the
object_type and instead fully spelled out FOR ALL TABLES and/or FOR
ALL SEQUENCES.

compare
CREATE PUBLICATION FOR ALL TABLES, SEQUENCES;
versus
CREATE PUBLICATION FOR ALL TABLES, ALL SEQUENCES;

~

Also AFAICT, the current syntax says it is impossible to mix FOR ALL
SEQUENCES with FOR TABLE etc but really that *should* be allowed,
right?

And it looks like you may come to similar grief in future if you try
things like:
"FOR ALL TABLES" mixed with "FOR SEQUENCE seq_name"
"FOR ALL TABLES" mixed with "FOR SEQUENCES IN SCHEMA schema_name"

~

So, maybe a revised syntax like below would end up being easier and
also more flexible:

CREATE PUBLICATION name
[ FOR publication_object [, ... ] ]
[ WITH ( publication_parameter [= value] [, ... ] ) ]

where publication_object is one of:

ALL TABLES
ALL SEQUENCES
TABLE [ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [
WHERE ( expression ) ] [, ... ]
TABLES IN SCHEMA { schema_name | CURRENT_SCHEMA } [, ... ]

The proposed syntax would be easier to extend in the future, so I have
modified it accordingly.

======
src/backend/commands/publicationcmds.c

CreatePublication:

2.
- /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("must be superuser to create FOR ALL TABLES publication")));
+ errmsg("must be superuser to create ALL TABLES and/or ALL SEQUENCES
publication")));

2a.
Typo.

/create ALL TABLES and/or ALL SEQUENCES publication/create a FOR ALL
TABLES and/or a FOR ALL SEQUENCES publication/

Modified

~

2b.
This message might be OK now, but I suspect it will become very messy
in future after you introduce another syntax like "FOR SEQUENCE
seq_name" etc (which would also be able to be used in combination with
a FOR ALL TABLES).

So, I think that for future-proofing against all the possible (future)
combinations, and for keeping the code cleaner, it will be far simpler
to just keep the errors for tables and sequences separated:

SUGGESTION:
if (!superuser())
{
if (stmt->for_all_tables)
ereport(ERROR, ... FOR ALL TABLES ...);
if (stmt->for_all_sequences)
ereport(ERROR, ... FOR ALL SEQUENCES ...);
}

If we do it that way, the error will not mention both publication types
when both "ALL TABLES" and "ALL SEQUENCES" are specified.

~~~

AlterPublicationOwner_internal:

3.
- if (form->puballtables && !superuser_arg(newOwnerId))
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((form->puballtables || form->puballsequences) &&
+ !superuser_arg(newOwnerId))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("permission denied to change owner of publication \"%s\"",
NameStr(form->pubname)),
- errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ errhint("The owner of ALL TABLES and/or ALL SEQUENCES publication
must be a superuser.")));

Ditto the above comment #2.

Modified the message

======
src/bin/psql/describe.c

4.
+ puboid_col = cols++;
+ pubname_col = cols++;
+ pubowner_col = cols++;
+ puballtables_col = cols++;
+
+ if (has_pubsequence)
+ {
+ appendPQExpBufferStr(&buf,
+ ", puballsequences");
+ puballsequences_col = cols++;
+ }
+
+ appendPQExpBufferStr(&buf,
+ ", pubinsert, pubupdate, pubdelete");
+ pubins_col = cols++;
+ pubupd_col = cols++;
+ pubdel_col = cols++;
+
if (has_pubtruncate)
+ {
appendPQExpBufferStr(&buf,
", pubtruncate");
+ pubtrunc_col = cols++;
+ }
+
if (has_pubgencols)
+ {
appendPQExpBufferStr(&buf,
", pubgencols");
+ pubgen_col = cols++;
+ }
+
if (has_pubviaroot)
+ {
appendPQExpBufferStr(&buf,
", pubviaroot");
+ pubviaroot_col = cols++;
+ }

There is some overlap/duplication of the new variable 'cols' and the
existing variable 'ncols'.

AFAICT you can just move/replace the declaration of 'ncols' to where
'cols' is declared, and then you can remove the duplicated code below
(because the above code is already doing the same thing).

if (has_pubtruncate)
ncols++;
if (has_pubgencols)
ncols++;
if (has_pubviaroot)
ncols++;
if (has_pubsequence)
ncols++;

I have removed ncols and used cols.

The attached patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20250204-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 53adb85a7115d1213ff7f5cd428f1d2c444972ed Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250204 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 47370e581a..9bb896e9a5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19690,6 +19690,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index b13ee2b745..773f4a182e 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b37e8a6f88..fdad30dfab 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0

v20250204-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From ced6b2d94e8bbe64d71943d78add3402fe7cadc7 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Sun, 8 Dec 2024 13:37:31 +0000
Subject: [PATCH v20250204 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  14 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 265 ++++++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 778 insertions(+), 352 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 5e25536554..0a17cc4e3f 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,12 +22,13 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
+    ALL TABLES
+    ALL SEQUENCES
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
 </synopsis>
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +272,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +291,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -431,6 +443,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b89098f5e9..4928a3417d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 35747b3df5..6bd10ed14f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -786,11 +786,11 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
+	/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+	if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+				 errmsg("must be superuser to create a FOR ALL TABLES and/or a FOR ALL SEQUENCES publication")));
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -824,6 +824,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1957,12 +1959,14 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
+		/* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+		if ((form->puballtables || form->puballsequences) &&
+			!superuser_arg(newOwnerId))
 			ereport(ERROR,
 					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 					 errmsg("permission denied to change owner of publication \"%s\"",
 							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+					 errhint("The owner of a FOR ALL TABLES and/or a FOR ALL SEQUENCES publication must be a superuser.")));
 
 		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
 			ereport(ERROR,
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b4c1e2c69d..f2495a14d8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19490,6 +19523,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df95..34505ae883 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4285,6 +4285,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4315,9 +4316,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4333,6 +4334,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4353,6 +4355,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4404,8 +4408,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f62b564ed1..31ed3bbdd2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -633,6 +633,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index bf65d44b94..9e11f2bd1a 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index d5543fd62b..d471097549 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1737,28 +1737,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1767,22 +1758,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1792,6 +1776,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1823,32 +1860,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2071,6 +2129,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6318,7 +6382,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6335,13 +6399,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6447,6 +6518,19 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
+	int			puboid_col = -1,	/* column indexes in "res" */
+				pubname_col = -1,
+				pubowner_col = -1,
+				puballtables_col = -1,
+				puballsequences_col = -1,
+				pubins_col = -1,
+				pubupd_col = -1,
+				pubdel_col = -1,
+				pubtrunc_col = -1,
+				pubgen_col = -1,
+				pubviaroot_col = -1;
+	int			cols = 0;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6464,22 +6548,52 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+	puboid_col = cols++;
+	pubname_col = cols++;
+	pubowner_col = cols++;
+	puballtables_col = cols++;
+
+	if (has_pubsequence)
+	{
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+		puballsequences_col = cols++;
+	}
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+	pubins_col = cols++;
+	pubupd_col = cols++;
+	pubdel_col = cols++;
+
 	if (has_pubtruncate)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
+		pubtrunc_col = cols++;
+	}
+
 	if (has_pubgencols)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubgencols");
+		pubgen_col = cols++;
+	}
+
 	if (has_pubviaroot)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+		pubviaroot_col = cols++;
+	}
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6521,26 +6635,25 @@ describePublications(const char *pattern)
 	for (i = 0; i < PQntuples(res); i++)
 	{
 		const char	align = 'l';
-		int			ncols = 5;
 		int			nrows = 1;
-		char	   *pubid = PQgetvalue(res, i, 0);
-		char	   *pubname = PQgetvalue(res, i, 1);
-		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+		char	   *pubid = PQgetvalue(res, i, puboid_col);
+		char	   *pubname = PQgetvalue(res, i, pubname_col);
+		bool		puballtables = strcmp(PQgetvalue(res, i, puballtables_col), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
-		if (has_pubtruncate)
-			ncols++;
-		if (has_pubgencols)
-			ncols++;
-		if (has_pubviaroot)
-			ncols++;
-
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
-		printTableInit(&cont, &myopt, title.data, ncols, nrows);
+
+		/*
+		 * The table will be initialized with (cols - 2) columns excluding
+		 * 'pubid' and 'pubname'.
+		 */
+		printTableInit(&cont, &myopt, title.data, cols - 2, nrows);
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6551,17 +6664,19 @@ describePublications(const char *pattern)
 		if (has_pubviaroot)
 			printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
 
-		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubowner_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, puballtables_col), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, puballsequences_col), false, false);	/* all sequences */
+		printTableAddCell(&cont, PQgetvalue(res, i, pubins_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubupd_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubdel_col), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubtrunc_col), false, false);
 		if (has_pubgencols)
-			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubgen_col), false, false);
 		if (has_pubviaroot)
-			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubviaroot_col), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 81cbf10aa2..84ab8e4576 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3491,12 +3491,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 30c0574e85..403db804d8 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -112,6 +118,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -147,6 +154,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 38d6ad7dcb..f5fa3eee62 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index c48f11f293..3ad1d6e9e8 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1801,18 +1873,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1823,50 +1895,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c4c21a95d0..4d3998eac5 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e1c4f913f8..cb8e04660b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2263,6 +2263,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

v20250204-0005-Documentation-for-sequence-synchronization.patchtext/x-patch; charset=US-ASCII; name=v20250204-0005-Documentation-for-sequence-synchronization.patchDownload
From 5020e8eff38f2727706d466979a1661eef535315 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20250204 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  13 +-
 doc/src/sgml/logical-replication.sgml     | 229 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 360 insertions(+), 34 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cc6cf9bef0..e8e1e57255 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8108,16 +8108,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8151,7 +8154,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fbdd6ce574..bec3cfe2f9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5094,7 +5094,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker will be respawned.
+        replication apply worker or sequence synchronization worker will be
+        respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5241,8 +5242,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5265,10 +5266,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 8290cd1a08..dfa2e9dc9f 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -111,7 +111,11 @@
    accessed.  Each table can be added to multiple publications if needed.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but their behavior differs from that of tables: incremental sequence
+   changes are not replicated; instead, the current state of the published
+   sequences can be synchronized at any given time. For more information,
+   refer to <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1570,6 +1574,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequences using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   On the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
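+  <para>
+   While synchronization is in progress, the sequence synchronization worker
+   can be observed in the <structname>pg_stat_subscription</structname> view.
+   The following is a hypothetical, timing-dependent illustration, assuming
+   the subscription <literal>sub1</literal> from
+   <xref linkend="logical-replication-sequences-examples"/>:
+<programlisting>
+test_sub=# SELECT subname, worker_type FROM pg_stat_subscription;
+ subname |       worker_type
+---------+--------------------------
+ sub1    | apply
+ sub1    | sequence synchronization
+(2 rows)
+</programlisting></para>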
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
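+   <para>
+    As a hypothetical illustration, reusing the sequence <literal>s2</literal>
+    and subscription <literal>sub1</literal> from
+    <xref linkend="logical-replication-sequences-examples"/>, if the
+    subscriber's <literal>s2</literal> had been created with a different
+    increment than on the publisher, the mismatch could be resolved as follows:
+<programlisting>
+test_sub=# -- assumes the publisher's s2 was created with INCREMENT BY 10
+test_sub=# ALTER SEQUENCE s2 INCREMENT BY 10;
+ALTER SEQUENCE
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+</programlisting></para>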
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber may become out of sync over time,
+    because incremental sequence changes on the publisher are not replicated.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
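+   <para>
+    As a simple illustration, assuming the sequence <literal>s1</literal>
+    from <xref linkend="logical-replication-sequences-examples"/>, the current
+    state can be compared by selecting from the sequence on both nodes:
+<programlisting>
+test_pub=# SELECT last_value, is_called FROM s1;
+ last_value | is_called
+------------+-----------
+         12 | t
+(1 row)
+
+test_sub=# SELECT last_value, is_called FROM s1;
+ last_value | is_called
+------------+-----------
+         11 | t
+(1 row)
+</programlisting></para>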
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1876,16 +2075,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2189,8 +2390,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2204,7 +2405,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d0d176cc54..3d841658cf 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..d40fc79e7b 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is
+          <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then
+      re-synchronize the sequence data. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a586156614..de82964f6c 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
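+  <para>
+   As a hypothetical illustration, assuming a publication <literal>pub1</literal>
+   created with <literal>FOR ALL SEQUENCES</literal> in a database containing
+   the sequences <literal>s1</literal> and <literal>s2</literal> in schema
+   <literal>public</literal>, the view could be queried as follows:
+<programlisting>
+test_pub=# SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub1    | public     | s1
+ pub1    | public     | s2
+(2 rows)
+</programlisting></para>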
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

v20250204-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchtext/x-patch; charset=US-ASCII; name=v20250204-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchDownload
From 023d7d1c743777b27fbdff697f5daa32690013d7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20250204 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 234 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413..1c71161e72 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 268b2675ca..b30bcaaa80 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79..9283e996ef 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..b8124681ce
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 6af5c9fe16..cfe638ae6a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 334bf3e7af..70ce1dd067 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4778,7 +4778,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 75d6fdf195..a883a03c34 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952..d816866f16 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index cb8e04660b..a03670f264 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2807,7 +2807,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250204-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 7f9b0ef2caac8f2405a2208a6417e5bfe1cf8d0c Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 23 Dec 2024 12:22:05 +0530
Subject: [PATCH v20250204 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences (a usage
      sketch follows this list).
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
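
As a rough usage sketch (not part of the patch; the subscription name 'sub'
is made up), the subscriber-side workflow and a way to inspect per-sequence
sync state could look like this:

  -- Pick up sequences (and tables) newly published since the last refresh:
  ALTER SUBSCRIPTION sub REFRESH PUBLICATION;

  -- Re-synchronize all published sequences, e.g. shortly before an upgrade:
  ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES;

  -- Inspect sequence sync state tracked in pg_subscription_rel
  -- ('i' = INIT, needs synchronizing; 'r' = READY, synchronized):
  SELECT c.relname, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
   WHERE c.relkind = 'S';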
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   6 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 215 ++++++
 30 files changed, 1509 insertions(+), 177 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 4928a3417d..73f6420c6d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
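
(For reference only, not part of the patch: the SRF can also be exercised
directly; 'pub' is an assumed publication name, and the returned OIDs can be
cast to regclass for readability.)

  SELECT gps.relid::regclass AS sequence_name
    FROM pg_get_publication_sequences('pub') AS gps(relid);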
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e72..68b55bb5ea 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7a595c84db..4fa5cddff7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 773f4a182e..c3b5467ec2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker, to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog; it reflects the page_lsn of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
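
(Editorial note on the hunk above: user-facing setval() behaviour appears
unchanged by the do_setval() -> SetSequence() refactoring, since both SQL
forms still pass SEQ_LOG_CNT_INVALID, presumably keeping the previous log_cnt
handling; only the internal sequence synchronization path supplies a real
log_cnt. For example, with an illustrative sequence name:

  SELECT setval('myseq', 42);         -- two-argument form, is_called = true
  SELECT setval('myseq', 42, false);  -- next nextval('myseq') will return 42
)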
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2d8a71ca1e..18bbdec143 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2086,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * Check and log a warning if the publisher has subscribed to the same table
  * from some other publisher. This check is required only if "copy_data = true"
  * and "origin = none" for CREATE SUBSCRIPTION and
- * ALTER SUBSCRIPTION ... REFRESH statements to notify the user that data
- * having origin might have been copied.
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to notify the user
+ * that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2124,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index e3e4e41ac3..c466306474 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -871,7 +871,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index f2495a14d8..80609b346c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b288915cec..09a75a310d 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index a3c7adbf1a..5f44822cdf 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -480,7 +481,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -488,6 +489,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1348,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef..a2268d8361 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..196d29e516
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%d), pg_sequence WHERE seqrelid = %d",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (*sequence_mismatch == false)
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the LSN when the sequence state was set. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the sequences to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for next batch */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent the sequencesync worker from being restarted
+	 * too frequently.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable, so there is little point
+ * in handling them here.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index b8124681ce..3c6ffba6f3 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -51,8 +51,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -67,15 +69,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -90,7 +101,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -110,7 +123,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -122,17 +147,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared static so that the cached value can be reused until
+	 * the subscription relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -145,16 +175,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -162,7 +195,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +224,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index cfe638ae6a..810f38d5f9 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 70ce1dd067..d4f11b24ca 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4629,8 +4655,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4709,6 +4735,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4728,14 +4758,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4780,6 +4813,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 22f16a3b46..914035c919 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3258,7 +3258,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368ac..5c5a775d40 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 34505ae883..377aa0d088 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5044,12 +5044,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 31ed3bbdd2..0dd6e60c2b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -790,6 +790,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 84ab8e4576..26e131c84f 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fdad30dfab..e38d2f5d65 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12130,6 +12130,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a883a03c34..6e26991b0c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683..26e3c9096a 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index f5fa3eee62..f938f5fb18 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index e62abfd814..9851f02dd3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index d816866f16..5ee21eab9a 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d929..66dcd71eef 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index d40b49714f..fbfa023740 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000000..94466a4f83
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false does not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should log a warning
+# when the sequence definitions do not match between publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

#189vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#187)
Re: Logical Replication of sequences

On Fri, 3 Jan 2025 at 09:07, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Some minor review comments for the patch v20241230-0003.

======
src/backend/replication/logical/syncutils.c

1.
+ * syncutils.c
+ *   PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2024, PostgreSQL Global Development Group

Happy New Year.

s/2024/2025/

Modified

~~~

2.
+/*
+ * Enum representing the overall state of subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that subscription relation state is
+ * up-to-date and valid.
+ */

2a.
That first sentence saying "overall state of [...] state" is a bit strange.

Maybe it can be reworded something like:
Enum for phases of the subscription relations state.

Modified

~

2b.
/is no longer valid and/is no longer valid, and/

Modified

2c.
/that subscription relation state is up-to-date/that the subscription
relation state is up-to-date/

Modified
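
With those three wording changes folded in, the comment would read roughly as
below (a sketch only; the enum members are taken from the patch context earlier
in the thread, and the exact declaration may differ):

/*
 * Enum for phases of the subscription relations state.
 *
 * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
 * state is no longer valid, and the subscription relations should be rebuilt.
 *
 * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
 * relations state is being rebuilt.
 *
 * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
 * up-to-date and valid.
 */
typedef enum
{
    SYNC_RELATIONS_STATE_NEEDS_REBUILD,
    SYNC_RELATIONS_STATE_REBUILD_STARTED,
    SYNC_RELATIONS_STATE_VALID,
} SyncingRelationsState;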

The changes for the same are available at the v20250204 version patch
attached at [1]/messages/by-id/CALDaNm07EtT7zQXhjvaX7AKUv_gKMsrSYxJQmmOHhpCZpvV07w@mail.gmail.com.

[1]: /messages/by-id/CALDaNm07EtT7zQXhjvaX7AKUv_gKMsrSYxJQmmOHhpCZpvV07w@mail.gmail.com

Regards,
Vignesh

#190Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#188)
Re: Logical Replication of sequences

Hi Vignesh,

FYI, looks like your attached patchset was misnamed 20250204 instead
of 20250104. Anyway, it does not affect the reviews, but I am going to
refer to it as 0104 from now.

~~~

I have no comments for the patch v20250104-0001.

Some comments for the patch v20250104-0002.

======
doc/src/sgml/ref/create_publication.sgml

1.
<phrase>where <replaceable
class="parameter">publication_object</replaceable> is one of:</phrase>

+ ALL TABLES
+ ALL SEQUENCES
TABLE [ ONLY ] <replaceable
class="parameter">table_name</replaceable> [ * ] [ ( <replaceable
class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE (
<replaceable class="parameter">expression</replaceable> ) ] [, ... ]
TABLES IN SCHEMA { <replaceable
class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ...
]

I'm wondering if it would be better to reorder these in the synopsis as:
TABLE...
TABLES IN SCHEMA...
ALL TABLES
ALL SEQUENCES

Then it will match the same order as the parameters section.

======
src/backend/commands/publicationcmds.c

2.

CreatePublication:

2.
- /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("must be superuser to create FOR ALL TABLES publication")));
+ errmsg("must be superuser to create ALL TABLES and/or ALL SEQUENCES
publication")));

2b.
This message might be OK now, but I suspect it will become very messy
in future after you introduce another syntax like "FOR SEQUENCE
seq_name" etc (which would also be able to be used in combination with
a FOR ALL TABLES).

So, I think that for future-proofing against all the possible (future)
combinations, and for keeping the code cleaner, it will be far simpler
to just keep the errors for tables and sequences separated:

SUGGESTION:
if (!superuser())
{
    if (stmt->for_all_tables)
        ereport(ERROR, ... FOR ALL TABLES ...);
    if (stmt->for_all_sequences)
        ereport(ERROR, ... FOR ALL SEQUENCES ...);
}
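
Spelled out with the errcode and message style from the hunk quoted above, that
suggestion might look roughly like this (a sketch only; the wording of the
second message is an assumption, not something the patch currently has):

if (!superuser())
{
    if (stmt->for_all_tables)
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                 errmsg("must be superuser to create FOR ALL TABLES publication")));

    /* message wording below is illustrative */
    if (stmt->for_all_sequences)
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                 errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
}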

If we do it that way, it will not print both publication types when both
"ALL TABLES" and "ALL SEQUENCES" are specified.

Yes, I know, but AFAICT you're going to encounter this same kind of
problem anyway with all the other combinations, where we only give an
error for the first thing it finds wrong.

For example,
CREATE PUBLICATION ... FOR ALL SEQUENCES, TABLES IN SCHEMA s1;

That's going to report "must be superuser to create a FOR ALL TABLES
and/or a FOR ALL SEQUENCES publication", but it's not going to say
"must be superuser to create FOR TABLES IN SCHEMA publication".

So, my point was, I guess we are not going to make error messages for
every possible combination, so why are we making a special case by
combining only the message for ALL TABLES and ALL SEQUENCES?

======
src/bin/psql/describe.c

3.
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, puballsequences_col),
false, false); /* all sequences */

The comment ("/* all sequences */") doesn't seem necessary given the
self-explanatory variable name. Also, none of the similar nearby code
has comments like this.

======
src/test/regress/expected/publication.out

4.
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL
SEQUENCES, ALL TABLES;

Do you think you should test this both ways?
e.g.1. FOR ALL SEQUENCES, ALL TABLES
e.g.2. FOR ALL TABLES, ALL SEQUENCES

~~~

5.
+DROP PUBLICATION regress_pub_forallsequences1,
regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL
SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  TABLES can be specified only once.

Should the DETAIL message say ALL TABLES instead of just TABLES?

~~~

6.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL
SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.

Should the DETAIL message say ALL SEQUENCES instead of just SEQUENCES?

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#191vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#190)
5 attachment(s)
Re: Logical Replication of sequences

On Mon, 6 Jan 2025 at 10:46, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

FYI, it looks like your attached patchset was misnamed 20250204 instead
of 20250104. Anyway, it does not affect the reviews, but I am going to
refer to it as 0104 from now on.

~~~

I have no comments for the patch v20250104-0001.

Some comments for the patch v20250104-0002.

======
doc/src/sgml/ref/create_publication.sgml

1.
<phrase>where <replaceable
class="parameter">publication_object</replaceable> is one of:</phrase>

+ ALL TABLES
+ ALL SEQUENCES
TABLE [ ONLY ] <replaceable
class="parameter">table_name</replaceable> [ * ] [ ( <replaceable
class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE (
<replaceable class="parameter">expression</replaceable> ) ] [, ... ]
TABLES IN SCHEMA { <replaceable
class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ...
]

I'm wondering if it would be better to reorder these in the synopsis as:
TABLE...
TABLES IN SCHEMA...
ALL TABLES
ALL SEQUENCES

Then it will match the same order as the parameters section.

Modified

======
src/backend/commands/publicationcmds.c

2.

CreatePublication:

- /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ /* FOR ALL TABLES or FOR ALL SEQUENCES requires superuser */
+ if ((stmt->for_all_tables || stmt->for_all_sequences) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("must be superuser to create FOR ALL TABLES publication")));
+ errmsg("must be superuser to create ALL TABLES and/or ALL SEQUENCES
publication")));

2b.
This message might be OK now, but I suspect it will become very messy
in future after you introduce another syntax like "FOR SEQUENCE
seq_name" etc (which would also be able to be used in combination with
a FOR ALL TABLES).

So, I think that for future-proofing against all the possible (future)
combinations, and for keeping the code cleaner, it will be far simpler
to just keep the errors for tables and sequences separated:

SUGGESTION:
if (!superuser())
{
if (stmt->for_all_tables)
ereport(ERROR, ... FOR ALL TABLES ...);
if (stmt->for_all_sequences)
ereport(ERROR, ... FOR ALL SEQUENCES ...);
}

If we do it that way, it will not print both publication types when
both "ALL TABLES" and "ALL SEQUENCES" are specified.

Yes, I know, but AFAICT you're going to encounter this same kind of
problem anyway with all the other combinations, where we only give an
error for the first thing it finds wrong.

For example,
CREATE PUBLICATION ... FOR ALL SEQUENCES, TABLES IN SCHEMA s1;

That's going to report "must be superuser to create a FOR ALL TABLES
and/or a FOR ALL SEQUENCES publication", but it's not going to say
"must be superuser to create FOR TABLES IN SCHEMA publication".

So, my point was, I guess we are not going to make error messages for
every possible combination, so why are we making a special case by
combining only the message for ALL TABLES and ALL SEQUENCES?

Modified

======
src/bin/psql/describe.c

3.
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, puballsequences_col),
false, false); /* all sequences */

The comment ("/* all sequences */") doesn't seem necessary given the
self-explanatory variable name. Also, none of the similar nearby code
has comments like this.

Modified

======
src/test/regress/expected/publication.out

4.
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL
SEQUENCES, ALL TABLES;

Do you think you should test this both ways?
e.g.1. FOR ALL SEQUENCES, ALL TABLES
e.g.2. FOR ALL TABLES, ALL SEQUENCES

I feel it is not required

~~~

5.
+DROP PUBLICATION regress_pub_forallsequences1,
regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL
SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  TABLES can be specified only once.

Should the DETAIL message say ALL TABLES instead of just TABLES?

Modified

~~~

6.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL
SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  SEQUENCES can be specified only once.

Should the DETAIL message say ALL SEQUENCES instead of just SEQUENCES?

Modified

The attached v20250110 version of the patch set includes the changes
for the above comments.

Regards,
Vignesh

Attachments:

v20250110-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 6ca16d85a0700f48d4e1bbad5af654d4cbf781f5 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250110 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 47370e581a..9bb896e9a5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19690,6 +19690,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index b13ee2b745..773f4a182e 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value stored in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b37e8a6f88..fdad30dfab 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0

v20250110-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 3cde7d289c7754ad16ba16d624419798461cdcbc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20250110 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 234 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413..1c71161e72 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 268b2675ca..b30bcaaa80 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79..9283e996ef 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..b8124681ce
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 6af5c9fe16..cfe638ae6a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 334bf3e7af..70ce1dd067 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4778,7 +4778,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 75d6fdf195..a883a03c34 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952..d816866f16 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index caf02d5e5d..9202a1cbe8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2808,7 +2808,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250110-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 2b5c759fabf838b9b650cf1617600b7f1acb624e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 8 Aug 2024 20:27:26 +0530
Subject: [PATCH v20250110 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  13 +-
 doc/src/sgml/logical-replication.sgml     | 229 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 360 insertions(+), 34 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cc6cf9bef0..e8e1e57255 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8108,16 +8108,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8151,7 +8154,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f1ab614575..67aadf3d89 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5096,7 +5096,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker will be respawned.
+        replication apply worker and sequence synchronization worker will be
+        respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5243,8 +5244,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5267,10 +5268,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 8290cd1a08..dfa2e9dc9f 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -111,7 +111,11 @@
    accessed.  Each table can be added to multiple publications if needed.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but their behavior differs from that of tables or groups of tables. Unlike
+   tables, sequences allow users to synchronize their current state at any
+   given time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1570,6 +1574,201 @@ test_sub=# SELECT * FROM t1 ORDER BY id;
 
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -1876,16 +2075,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2189,8 +2390,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2204,7 +2405,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d0d176cc54..3d841658cf 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..d40fc79e7b 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a586156614..de82964f6c 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
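
As a minimal illustration of the behaviour documented above (the publication
and subscription names are made up), the new view and command could be used
like this:

    -- on the publisher: list the sequences included in a publication
    SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'mypub';

    -- on the subscriber: re-synchronize all subscribed sequences
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;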

v20250110-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From c58059fb6d75b9e1abbef7b2cf4d7e8e735f931a Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 23 Dec 2024 12:22:05 +0530
Subject: [PATCH v20250110 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of the
       sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the
       publisher and the subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing the synchronized sequences in
       batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
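
For illustration, the progress of sequence synchronization can be observed on
the subscriber by querying pg_subscription_rel (a sketch; it lists sequences
across all subscriptions):

    SELECT c.relname, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';

Here srsubstate is expected to move from 'i' (INIT) to 'r' (READY) once the
sequencesync worker has processed a sequence.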
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   6 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 215 ++++++
 30 files changed, 1509 insertions(+), 177 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 4928a3417d..73f6420c6d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1336,3 +1336,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e72..68b55bb5ea 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: include the subscription's tables in the result.
+ *
+ * get_sequences: include the subscription's sequences in the result.
+ *
+ * all_states: if true, return the requested relations in all states;
+ * otherwise return only those that have not yet reached READY state (for
+ * sequences this means they are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7a595c84db..4fa5cddff7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 773f4a182e..c3b5467ec2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page LSN of the sequence in the pg_subscription_rel system
+ * catalog.  It reflects the page LSN of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2d8a71ca1e..18bbdec143 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, newly added relations are registered in INIT
+ * state; otherwise, they are registered in READY state.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added to or removed from the publication since the
+ * subscription was created or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added to or removed from the publication
+ * since the subscription was created or last refreshed.
+ *
+ * Note: this is a common function for handling the different REFRESH
+ * commands, according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
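+ *
+ * For illustration, the corresponding calls made from AlterSubscription() are:
+ *
+ *    REFRESH PUBLICATION SEQUENCES:
+ *        AlterSubscription_refresh(sub, true, NULL, false, true, true);
+ *    REFRESH PUBLICATION [WITH (copy_data=true|false)]:
+ *        AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);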
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting their state in pg_subscription_rel to INIT.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip sequence relations. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2086,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * Check and log a warning if the publisher has subscribed to the same table
  * from some other publisher. This check is required only if "copy_data = true"
  * and "origin = none" for CREATE SUBSCRIPTION and
- * ALTER SUBSCRIPTION ... REFRESH statements to notify the user that data
- * having origin might have been copied.
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to notify the user
+ * that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2124,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index e3e4e41ac3..c466306474 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -871,7 +871,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dbbc66ffcf..5e30bd0476 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10833,11 +10833,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b288915cec..09a75a310d 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index a3c7adbf1a..5f44822cdf 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -480,7 +481,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -488,6 +489,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync failure time in the subscription's apply worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1348,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef..a2268d8361 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..196d29e516
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework for establishing connections with the databases to retrieve
+ *    the sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * has exited, a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
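+	/*
+	 * Oids of the columns returned by the remote query below: last_value,
+	 * log_cnt, is_called, page_lsn, seqtypid, seqstart, seqincrement, seqmin,
+	 * seqmax and seqcycle.
+	 */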
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%d), pg_sequence WHERE seqrelid = %d",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch the remote OID and relkind of the sequence. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!*sequence_mismatch)
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the LSN when the sequence state was set. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * Synchronizing each sequence in its own transaction incurs the overhead of
+ * repeatedly starting and committing a transaction. Conversely, an
+ * excessively large batch would keep a transaction open for too long, so
+ * keep the batch size moderate.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
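+	/*
+	 * Synchronize the sequences in batches of MAX_SEQUENCES_SYNC_PER_BATCH,
+	 * committing after each batch.
+	 */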
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
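+		/*
+		 * Open and lock the local sequence; the lock is retained until the
+		 * current batch's transaction commits.
+		 */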
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * If the sequence copy fails, report a warning for any sequences
+		 * whose parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
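+		/*
+		 * Remember any parameter mismatch so that it can be reported in a
+		 * single batched warning; otherwise mark the sequence as READY using
+		 * the remote page LSN.
+		 */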
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or
+		 * synchronized the last remaining sequence?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+
+			/* Prepare for the next batch. */
+			start_txn = true;
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * If sequence synchronization failed due to a parameter mismatch, set the
+	 * failure time to prevent the sequencesync worker from being relaunched
+	 * too frequently.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index b8124681ce..3c6ffba6f3 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -51,8 +51,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -67,15 +69,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -90,7 +101,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -110,7 +123,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -122,17 +147,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is static so that the value is remembered across calls, until the
+	 * system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -145,16 +175,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -162,7 +195,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +224,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index cfe638ae6a..810f38d5f9 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 70ce1dd067..d4f11b24ca 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4629,8 +4655,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4709,6 +4735,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4728,14 +4758,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4780,6 +4813,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index c9d8cd796a..d6647a8030 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3258,7 +3258,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368ac..5c5a775d40 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 34505ae883..377aa0d088 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5044,12 +5044,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 31ed3bbdd2..0dd6e60c2b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -790,6 +790,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 84ab8e4576..26e131c84f 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fdad30dfab..e38d2f5d65 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12130,6 +12130,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a883a03c34..6e26991b0c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683..26e3c9096a 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index f5fa3eee62..f938f5fb18 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,7 +4253,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index e62abfd814..9851f02dd3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index d816866f16..5ee21eab9a 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..81ecbb989e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d929..66dcd71eef 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index d40b49714f..fbfa023740 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000000..94466a4f83
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = false) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false does not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

Attachment: v20250110-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From cf169679a21783fa5b292bd84132a7851f49873e Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Sun, 8 Dec 2024 13:37:31 +0000
Subject: [PATCH v20250110 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d and \dRp commands are enhanced to display the
publications that include a given sequence and the sequences included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 265 ++++++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 548 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 803 insertions(+), 365 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 5e25536554..2126c9538e 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -261,10 +272,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -280,8 +291,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -431,6 +443,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b89098f5e9..4928a3417d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1041,6 +1042,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1063,6 +1100,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
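
(Side note, not part of the patch: GetAllSequencesPublicationRelations() above
is roughly the C equivalent of the query below. is_publishable_class() boils
down to the relkind, persistence and FirstNormalObjectId checks, with 16384
being the usual value of that constant.)

  SELECT c.oid
  FROM pg_catalog.pg_class c
  WHERE c.relkind = 'S'            -- RELKIND_SEQUENCE
    AND c.relpersistence = 'p'     -- RELPERSISTENCE_PERMANENT
    AND c.oid >= 16384;            -- FirstNormalObjectId
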
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 35747b3df5..f87cafafe4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -786,11 +786,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -824,6 +830,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1957,19 +1965,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
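
(Just to illustrate the new privilege check above -- not part of the patch, and
the role name is made up. The role needs CREATE on the database so that it gets
past the earlier ACL check and reaches the superuser check:)

  CREATE ROLE regress_seq_user;                        -- hypothetical role
  GRANT CREATE ON DATABASE postgres TO regress_seq_user;
  SET ROLE regress_seq_user;
  CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
  -- ERROR:  must be superuser to create a FOR ALL SEQUENCES publication
  RESET ROLE;
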
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b4c1e2c69d..dbbc66ffcf 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -258,6 +262,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -441,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <node>	opt_routine_body
 %type <groupclause> group_clause
@@ -577,6 +582,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10553,7 +10559,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ALL pub_obj_type] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10573,13 +10584,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10691,6 +10702,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19490,6 +19523,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
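
(For quick reference, and matching the regression tests further down, the new
grammar accepts the forms below; the publication names are arbitrary.
Duplicates of the same object type are rejected by
preprocess_pub_all_objtype_list():)

  CREATE PUBLICATION pub_t  FOR ALL TABLES;
  CREATE PUBLICATION pub_s  FOR ALL SEQUENCES;
  CREATE PUBLICATION pub_ts FOR ALL TABLES, ALL SEQUENCES;
  CREATE PUBLICATION pub_x  FOR ALL TABLES, ALL TABLES;  -- fails: ALL TABLES specified twice
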
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df95..34505ae883 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4285,6 +4285,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4315,9 +4316,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBufferStr(query, "false AS pubgencols ");
+		appendPQExpBufferStr(query, "false AS pubgencols, false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4333,6 +4334,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4353,6 +4355,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4404,8 +4408,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f62b564ed1..31ed3bbdd2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -633,6 +633,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index bf65d44b94..9e11f2bd1a 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2996,6 +2996,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index d5543fd62b..7b6fa15d39 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1737,28 +1737,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1767,22 +1758,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1792,6 +1776,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1823,32 +1860,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2071,6 +2129,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6318,7 +6382,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6335,13 +6399,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6447,6 +6518,19 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
+	int			puboid_col = -1,	/* column indexes in "res" */
+				pubname_col = -1,
+				pubowner_col = -1,
+				puballtables_col = -1,
+				puballsequences_col = -1,
+				pubins_col = -1,
+				pubupd_col = -1,
+				pubdel_col = -1,
+				pubtrunc_col = -1,
+				pubgen_col = -1,
+				pubviaroot_col = -1;
+	int			cols = 0;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6464,22 +6548,52 @@ describePublications(const char *pattern)
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubsequence = (pset.sversion >= 180000);
 
 	initPQExpBuffer(&buf);
 
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+	puboid_col = cols++;
+	pubname_col = cols++;
+	pubowner_col = cols++;
+	puballtables_col = cols++;
+
+	if (has_pubsequence)
+	{
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+		puballsequences_col = cols++;
+	}
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+	pubins_col = cols++;
+	pubupd_col = cols++;
+	pubdel_col = cols++;
+
 	if (has_pubtruncate)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
+		pubtrunc_col = cols++;
+	}
+
 	if (has_pubgencols)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubgencols");
+		pubgen_col = cols++;
+	}
+
 	if (has_pubviaroot)
+	{
 		appendPQExpBufferStr(&buf,
 							 ", pubviaroot");
+		pubviaroot_col = cols++;
+	}
 
 	appendPQExpBufferStr(&buf,
 						 "\nFROM pg_catalog.pg_publication\n");
@@ -6521,26 +6635,25 @@ describePublications(const char *pattern)
 	for (i = 0; i < PQntuples(res); i++)
 	{
 		const char	align = 'l';
-		int			ncols = 5;
 		int			nrows = 1;
-		char	   *pubid = PQgetvalue(res, i, 0);
-		char	   *pubname = PQgetvalue(res, i, 1);
-		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+		char	   *pubid = PQgetvalue(res, i, puboid_col);
+		char	   *pubname = PQgetvalue(res, i, pubname_col);
+		bool		puballtables = strcmp(PQgetvalue(res, i, puballtables_col), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
-		if (has_pubtruncate)
-			ncols++;
-		if (has_pubgencols)
-			ncols++;
-		if (has_pubviaroot)
-			ncols++;
-
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
-		printTableInit(&cont, &myopt, title.data, ncols, nrows);
+
+		/*
+		 * The table will be initialized with (cols - 2) columns excluding
+		 * 'pubid' and 'pubname'.
+		 */
+		printTableInit(&cont, &myopt, title.data, cols - 2, nrows);
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6551,17 +6664,19 @@ describePublications(const char *pattern)
 		if (has_pubviaroot)
 			printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
 
-		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubowner_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, puballtables_col), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, puballsequences_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubins_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubupd_col), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, pubdel_col), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubtrunc_col), false, false);
 		if (has_pubgencols)
-			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubgen_col), false, false);
 		if (has_pubviaroot)
-			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+			printTableAddCell(&cont, PQgetvalue(res, i, pubviaroot_col), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 81cbf10aa2..84ab8e4576 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3491,12 +3491,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 30c0574e85..403db804d8 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass all
+	 * sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -112,6 +118,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	bool		pubgencols;
 	PublicationActions pubactions;
@@ -147,6 +154,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
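
(If you want to poke at the new catalog flag by hand -- again, just an
illustration, not part of the patch:)

  SELECT pubname, puballtables, puballsequences
  FROM pg_catalog.pg_publication
  ORDER BY pubname;
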
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 38d6ad7dcb..f5fa3eee62 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4185,6 +4185,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4192,6 +4208,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 36dc31c16c..76c38b2e0f 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6350,9 +6350,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index c48f11f293..3873d3a72e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = 'true', publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = 'foo');
 ERROR:  publish_generated_columns requires a Boolean value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | f                 | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpib_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | f                 | f
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpib_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | f                 | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | f                 | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | f                 | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | f                 | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | f                 | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | f                 | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | f                 | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | f                 | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | f                 | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | f                 | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1801,18 +1873,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=1);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | t                 | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns=0);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | f                 | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1823,50 +1895,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'=false
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns=false);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'=true
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns=true);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | t                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | t                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publication_generate_columns'=false
 ALTER PUBLICATION pub2 SET (publish_generated_columns = false);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'=false
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | f                 | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | f                 | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c4c21a95d0..4d3998eac5 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index eb93debe10..caf02d5e5d 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2264,6 +2264,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#192vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#191)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 10 Jan 2025 at 20:07, vignesh C <vignesh21@gmail.com> wrote:

The attached v20250110 version patch has the changes for the same.

The patch no longer applied on top of HEAD because of recent commits;
here is a rebased version.

Regards,
Vignesh

Attachments:

v20250203-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 6bad683dea8bf538562fdb2192d0cf77e5bc4e24 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250203 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/logical-replication.sgml     | 229 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 360 insertions(+), 35 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 088fb175cc..8932a205f9 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8132,16 +8132,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8175,7 +8178,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
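
As a quick illustration of the pg_subscription_rel wording above, a catalog
query along these lines lists each table and sequence known to a subscription
together with its synchronization state (all identifiers here are the standard
catalog names):

  -- one row per (subscription, table-or-sequence) pair
  SELECT s.subname, sr.srrelid::regclass AS relation, sr.srsubstate
  FROM pg_subscription_rel sr
  JOIN pg_subscription s ON s.oid = sr.srsubid;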
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a782f10998..c5e438300a 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4953,8 +4953,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        replication apply worker, table synchronization worker, or sequence
+        synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5101,8 +5101,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5125,10 +5125,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
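
For example, the per-subscription synchronization worker limit discussed above
could be raised without a restart (the value 4 is arbitrary; this parameter
takes effect on a configuration reload):

  ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
  SELECT pg_reload_conf();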
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 613abcd28b..9f94f76177 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -111,7 +111,11 @@
    accessed.  Each table can be added to multiple publications if needed.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but their behavior differs from that of tables or groups of tables. Unlike
+   tables, sequences are not replicated incrementally; instead, their current
+   state can be synchronized on demand. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
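
For instance, if such a WARNING reports a differing increment for a sequence
s2 (the sequence name, subscription name, and increment below are only
illustrative), the subscriber could be aligned and re-synchronized with:

  ALTER SEQUENCE s2 INCREMENT BY 10;
  ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;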
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
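
One way to spot such drift is to run the same query on both nodes and compare
the output, for example against the pg_sequences view:

  -- last_value is NULL until the sequence has been used
  SELECT schemaname, sequencename, last_value
  FROM pg_sequences ORDER BY 1, 2;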
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2092,16 +2291,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values.  This can be
+     done by executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2416,8 +2617,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2431,7 +2632,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4e917f159a..bdf0237ae4 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2017,8 +2017,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007..d40fc79e7b 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 6cf7d4f9a1..212ee8c16d 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 8e2b0a7927..31a98defe5 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

Attachment: v20250203-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 9b020f9c8ec390708ae5d9a9cb32942bd3ece85f Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250203 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state(), which allows
retrieval of the sequence state (last_value, log_cnt, is_called) together
with the page LSN.  (A usage sketch follows the file summary below.)
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)
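
For readers who want to try the function, here is a minimal sketch, assuming
this patch is applied; the sequence name "myseq" is just an illustration:

    CREATE SEQUENCE myseq;
    SELECT nextval('myseq');
    -- Inspect the on-disk sequence state, including the page LSN that the
    -- later sequence-synchronization patch records in pg_subscription_rel.
    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('myseq');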

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 7efc81936a..b0934332ca 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19741,6 +19741,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index b13ee2b745..773f4a182e 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence as stored on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5b8c2ad2a5..a92c4d46bb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3375,6 +3375,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8..c2d6c78827 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b6074..46054527df 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0

Attachment: v20250203-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 6fa962577c5d13323d466bcf54fc7151c0d304ec Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 23 Dec 2024 12:22:05 +0530
Subject: [PATCH v20250203 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of all sequences in INIT state using
       pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are processed, committing synchronized
       sequences in batches of 100.  (A usage sketch follows the file summary below.)

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   6 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 215 ++++++
 30 files changed, 1509 insertions(+), 177 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl
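
As a rough illustration of the states described above (not part of the patch),
the following psql sketch assumes the patch series is applied and uses made-up
publication and subscription names "pub1" and "sub1"; the srsubstate letters
follow the existing pg_subscription_rel convention ('i' = INIT, 'r' = READY):

    -- Publisher: list the sequences carried by the publication.
    SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';

    -- Subscriber: per-sequence synchronization state.
    SELECT sr.srrelid::regclass AS seqname, sr.srsubstate
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid
     WHERE c.relkind = 'S';

    -- Subscriber: force a full re-synchronization of all subscribed sequences.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;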

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 35e14e8705..e8d5dc5226 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1362,3 +1362,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e72..68b55bb5ea 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: if true, include the subscription's tables in the result.
+ *
+ * get_sequences: if true, include the subscription's sequences in the result.
+ *
+ * all_states: if true, return relations in all states; otherwise return only
+ * relations that have not yet reached READY state (for sequences this means
+ * they are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index cddc3ea9b5..9add5ad7f6 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 773f4a182e..c3b5467ec2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * The log_cnt argument is currently used only by the sequence synchronization
+ * worker, to set log_cnt while synchronizing sequence values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record, in the pg_subscription_rel system catalog, the page LSN of the
+ * remote sequence at the moment it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2d8a71ca1e..18bbdec143 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function that handles the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2086,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * Check and log a warning if the publisher has subscribed to the same table
  * from some other publisher. This check is required only if "copy_data = true"
  * and "origin = none" for CREATE SUBSCRIPTION and
- * ALTER SUBSCRIPTION ... REFRESH statements to notify the user that data
- * having origin might have been copied.
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to notify the user
+ * that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2124,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2304,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 2dac4bd363..130c80f9c2 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -871,7 +871,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 185b601667..e1f2ee12e3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10864,11 +10864,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
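
As I read the grammar and sequencesync changes, the new command would be used
on the subscriber roughly like this (subscription name is illustrative, not
taken from the patch):

    -- re-synchronize all subscribed sequences from the publisher
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

    -- a plain refresh still picks up newly added tables and sequences,
    -- but does not re-sync sequences that are already in READY state
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
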
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index b288915cec..09a75a310d 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c6752..c719af1f8a 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index a3c7adbf1a..5f44822cdf 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -480,7 +481,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -488,6 +489,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time in the subscription's apply
+ * worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1348,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef..a2268d8361 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 0000000000..196d29e516
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process has no direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
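+
+/*
+ * As an illustration (nothing in the code depends on this), the sync state of
+ * the subscribed sequences can be inspected on the subscriber with a query
+ * like:
+ *
+ *   SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
+ *   FROM pg_subscription_rel sr
+ *   JOIN pg_class c ON c.oid = sr.srrelid
+ *   WHERE c.relkind = 'S';
+ *
+ * where srsubstate 'i' is SUBREL_STATE_INIT and 'r' is SUBREL_STATE_READY.
+ */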
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_catalog.pg_sequence_state(%u), pg_catalog.pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!*sequence_mismatch)
+		SetSequence(relid, seq_last_value, seq_is_called, seq_log_cnt);
+
+	/* Return the page LSN of the remote sequence. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. At the same time, an excessively
+ * high batch size would keep transactions (and the sequence locks) open for
+ * extended periods, so keep the batch size moderate.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or last
+		 * remaining sequences to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for next batch */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent the sequencesync worker from being restarted
+	 * too frequently.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters differ between the publisher and subscriber for one or more sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index b8124681ce..3c6ffba6f3 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -51,8 +51,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -67,15 +69,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -90,7 +101,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -110,7 +123,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -122,17 +147,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * Declared static so that the cached value can be reused until the
+	 * system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -145,16 +175,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -162,7 +195,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +224,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index cfe638ae6a..810f38d5f9 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index be81e2a7a6..77e6c74a6a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1026,7 +1031,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1148,7 +1156,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1204,7 +1215,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1270,7 +1284,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1405,7 +1422,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2247,7 +2267,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3716,7 +3739,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4629,8 +4655,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4709,6 +4735,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4728,14 +4758,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4780,6 +4813,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 71448bb4fd..44741cfd81 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3259,7 +3259,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368ac..5c5a775d40 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 006e8bf3bb..e72d9ca06f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5045,12 +5045,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 8f6b603724..14724099d1 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -791,6 +791,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 84ab8e4576..26e131c84f 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2269,7 +2269,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a92c4d46bb..9abce4dbd3 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12155,6 +12155,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d8..0c706bd9cd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683..26e3c9096a 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 5b98c88e86..8642f3d316 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4300,7 +4300,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index e62abfd814..9851f02dd3 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index d816866f16..5ee21eab9a 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3361f6a69c..2771568600 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d929..66dcd71eef 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index d40b49714f..fbfa023740 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 0000000000..94466a4f83
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false does not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should emit a warning
+# when the sequence definition differs between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0
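
For readers skimming the patch set, here is a condensed sketch of the
subscriber-side workflow that the new 035_sequences.pl test above exercises.
Object names and the connection string are illustrative; the syntax used is
the FOR ALL SEQUENCES and REFRESH PUBLICATION SEQUENCES forms added by this
series, and the sequence must already exist on the subscriber (a definition
mismatch only produces a warning):

-- Publisher
CREATE SEQUENCE regress_s1;
CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

-- Subscriber
CREATE SEQUENCE regress_s1;
CREATE SUBSCRIPTION regress_seq_sub
    CONNECTION '<publisher connection string>' PUBLICATION regress_seq_pub;

-- After new sequences and further nextval() calls on the publisher:
-- sync only sequences newly added to the publication
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
-- re-synchronize all published sequences, including already-known ones
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

As the regress_s4 case in the test shows, REFRESH PUBLICATION with
copy_data = false adds a newly published sequence to the subscription without
copying its current value.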

v20250203-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 16d01a4c1e2c52e956f243a6a778b968bf7016e5 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250203 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands \d and \dRp have been enhanced: \d on a
sequence now lists the publications that include it, and \dRp shows whether
a publication publishes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
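
As a minimal illustration of what this enables (publication names are
placeholders; the exact output columns appear in the regression test changes
below):

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION all_pub FOR ALL TABLES, ALL SEQUENCES;
\dRp+ seq_pub        -- now reports an "All sequences" attribute
\d regress_pub_seq0  -- footer lists the publications the sequence belongs to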
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 208 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 766 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89f..52c6095eb5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 41ffd494c8..35e14e8705 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1053,6 +1054,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1075,6 +1112,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 951ffabb65..7a36d71a4d 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -788,11 +788,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -826,6 +832,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1959,19 +1967,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index d7f9c00c40..185b601667 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10584,7 +10590,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10604,13 +10615,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10722,6 +10733,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19597,6 +19630,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 02e1fdf8f7..006e8bf3bb 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4286,6 +4286,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4316,9 +4317,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4334,6 +4335,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4354,6 +4356,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4405,8 +4409,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7139c88a69..8f6b603724 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -634,6 +634,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 805ba9f49f..147e5356f5 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3061,6 +3061,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index aa4363b200..a8acc0c8f8 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1753,28 +1753,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1783,22 +1774,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1808,6 +1792,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1839,32 +1876,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2087,6 +2145,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6340,7 +6404,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6357,13 +6421,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6474,6 +6545,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6488,6 +6560,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6497,7 +6570,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6572,6 +6656,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6585,6 +6671,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6597,15 +6685,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 81cbf10aa2..84ab8e4576 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3491,12 +3491,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a861..283c0b1119 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ffe155ee20..5b98c88e86 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4232,6 +4232,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4239,6 +4255,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index e6f7b9013d..a87f78a196 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6435,9 +6435,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index bc3898fbe5..30692ea15e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+-- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+-- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -764,10 +836,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -957,10 +1029,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1168,10 +1240,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1211,10 +1283,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1294,10 +1366,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1307,20 +1379,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1336,19 +1408,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1362,44 +1434,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1433,10 +1505,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1444,20 +1516,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1465,10 +1537,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1477,10 +1549,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1489,10 +1561,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1500,10 +1572,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1511,10 +1583,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1522,29 +1594,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1553,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1565,10 +1637,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1647,18 +1719,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1668,20 +1740,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1803,26 +1875,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1834,50 +1906,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 47f0329c24..a86a389af5 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9a3bee93de..7c7d7e9ac2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2268,6 +2268,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250203-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From ad96d423b5d1a854ea23b02cba903a7d77ff0a58 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20250203 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 234 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413..1c71161e72 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4e..c62c8c6752 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 268b2675ca..b30bcaaa80 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79..9283e996ef 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 0000000000..b8124681ce
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 6af5c9fe16..cfe638ae6a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6966037d2e..be81e2a7a6 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1027,7 +1027,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1149,7 +1149,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1205,7 +1205,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1271,7 +1271,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1406,7 +1406,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2248,7 +2248,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3717,7 +3717,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4778,7 +4778,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869..ea869588d8 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952..d816866f16 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 7c7d7e9ac2..e7b89b61db 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2817,7 +2817,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0
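
A side note for reviewers skimming the 0003 patch: the three-state validity flag in FetchRelationStates() exists so that a syscache invalidation arriving while the list is being rebuilt cannot cause an endless rebuild loop — the freshly built list is still returned to the current caller, but it stays marked stale and is rebuilt on the next access. The following is a minimal standalone sketch of that pattern, not PostgreSQL code; CacheValidity, fetch() and invalidate() are illustrative names, and the "catalog scan" is faked with a constant.

/*
 * Standalone sketch (illustrative only): invalidation-safe cache rebuild,
 * mirroring the SYNC_RELATIONS_STATE_* handling in syncutils.c above.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum
{
	STATE_NEEDS_REBUILD,
	STATE_REBUILD_STARTED,
	STATE_VALID,
} CacheValidity;

static CacheValidity validity = STATE_NEEDS_REBUILD;
static int	cached_count = 0;	/* stands in for table_states_not_ready */

/* Invalidation callback: only flags the cache as stale. */
static void
invalidate(void)
{
	validity = STATE_NEEDS_REBUILD;
}

/* Rebuild the cache if it is stale; return the cached value either way. */
static int
fetch(bool invalidated_mid_rebuild)
{
	if (validity != STATE_VALID)
	{
		validity = STATE_REBUILD_STARTED;

		cached_count = 3;		/* pretend we scanned the catalog here */

		if (invalidated_mid_rebuild)
			invalidate();		/* a concurrent catalog change arrives */

		/* Mark valid only if no invalidation arrived during the rebuild. */
		if (validity == STATE_REBUILD_STARTED)
			validity = STATE_VALID;
	}
	return cached_count;
}

int
main(void)
{
	printf("clean rebuild: count=%d, valid=%d\n",
		   fetch(false), validity == STATE_VALID);
	invalidate();
	printf("invalidated mid-rebuild: count=%d, valid=%d\n",
		   fetch(true), validity == STATE_VALID);
	printf("next access rebuilds: count=%d, valid=%d\n",
		   fetch(false), validity == STATE_VALID);
	return 0;
}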

#193vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#192)
5 attachment(s)
Re: Logical Replication of sequences

On Mon, 3 Feb 2025 at 11:13, vignesh C <vignesh21@gmail.com> wrote:
> On Fri, 10 Jan 2025 at 20:07, vignesh C <vignesh21@gmail.com> wrote:
> > The attached v20250110 version patch has the changes for the same.
>
> The patch was not applying on top of HEAD because of recent commits,
> here is a rebased version.

The patch was not applying on top of HEAD because of recent commits,
here is a rebased version.

Regards,
Vignesh

Attachments:

Attachment: v20250312-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 36e2e25b4ad21d2c4e36107478bc7361e1a4c6f0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250312 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 51dd8ad6571..88e2cd95bc9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19880,6 +19880,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 4b7c5113aab..ae6eafbe21a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value written to the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 42e427f8fe8..b590f0afea9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3400,6 +3400,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..c2d6c788271 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..46054527df1 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0
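
Not part of the patch, but for anyone who wants to poke at the new function from outside psql, here is a small libpq sketch that reads the state of the regression-test sequence. The connection string and the sequence name ("dbname=postgres", 'sequence_test') are placeholders; the column names follow the pg_proc.dat entry added above.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	/* Placeholder connection string; adjust for your environment. */
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* Fetch the on-disk sequence state, including the page LSN. */
	res = PQexec(conn,
				 "SELECT page_lsn, last_value, log_cnt, is_called "
				 "FROM pg_sequence_state('sequence_test')");

	if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
		printf("page_lsn=%s last_value=%s log_cnt=%s is_called=%s\n",
			   PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1),
			   PQgetvalue(res, 0, 2), PQgetvalue(res, 0, 3));
	else
		fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

	PQclear(res);
	PQfinish(conn);
	return 0;
}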

Attachment: v20250312-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 4a4786baaede74affcdc33116cf1b4998282714c Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250312 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Additionally, the psql commands \d and \dRp are enhanced to better display
the publications that include a given sequence and the sequences included
in a publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 208 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 766 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89fb..52c6095eb5e 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 150a768d16f..2b25abadd9f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -809,11 +809,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -847,6 +853,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -1980,19 +1988,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 271ae26cbaf..f5012ea27bd 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10619,7 +10625,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10639,13 +10650,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10757,6 +10768,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19634,6 +19667,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c371570501a..3de0b6bf0cd 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4329,6 +4329,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4359,9 +4360,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4377,6 +4378,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4397,6 +4399,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4448,8 +4452,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bbdb30b5f54..892b53c0184 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -656,6 +656,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index c7bffc1b045..9b323284aa0 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3105,6 +3105,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index e6cf468ac9e..a5b539bc58a 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1753,28 +1753,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1783,22 +1774,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1808,6 +1792,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1839,32 +1876,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2087,6 +2145,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6399,7 +6463,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6416,13 +6480,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6533,6 +6604,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6547,6 +6619,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6556,7 +6629,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6631,6 +6715,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6644,6 +6730,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6656,15 +6744,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8432be641ac..a16809c7de8 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3520,12 +3520,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 23c9e3c5abf..bc8ad978369 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4246,6 +4246,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4253,6 +4269,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 6543e90de75..fe59384424e 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6441,9 +6441,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..9735203fe58 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..990ae8283e0 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index dfe2690bdd3..cc84e449836 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2288,6 +2288,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
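
For quick reference, the new syntax exercised by the regression tests above
boils down to a short psql session like the following (object names are
illustrative only, not taken from the patch):

CREATE SEQUENCE demo_seq;
CREATE PUBLICATION demo_pub FOR ALL SEQUENCES;
SELECT pubname, puballtables, puballsequences
  FROM pg_publication
 WHERE pubname = 'demo_pub';
-- With the patch series applied, puballsequences is expected to be 't' and
-- puballtables 'f'; \dRp+ demo_pub additionally shows the new
-- "All sequences" column.
DROP PUBLICATION demo_pub;
DROP SEQUENCE demo_seq;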

v20250312-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 452083e4e9d2826f84fc20a4985d92b9adf6bf7a Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250312 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/logical-replication.sgml     | 229 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 360 insertions(+), 35 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index fb050635551..65abc2c3e11 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d2fa5f7d1a9..e92c4865710 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5025,8 +5025,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        replication apply worker, table synchronization worker, or sequence
+        synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5173,8 +5173,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5197,10 +5197,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 3d18e507bbc..be08f5bde2e 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -111,7 +111,11 @@
    accessed.  Each table can be added to multiple publications if needed.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but sequences behave differently from tables: incremental sequence changes
+   are not replicated; instead, the current state of published sequences can
+   be synchronized on demand. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and the
+    subscriber and, if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2092,16 +2291,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2622,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2436,7 +2637,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index aaa6586d3a4..f5777ba7f64 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2023,8 +2023,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..d40fc79e7be 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..6877873fb82 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 3f5a306247e..a9bb3ae6e3d 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
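
As a quick illustration of the new pg_publication_sequences view documented
above, a minimal query sketch (the publication name 'pub1' is hypothetical):

    -- on the publisher: list the sequences published by 'pub1'
    SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'pub1';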

Attachment: v20250312-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From f24d52a68c3c8b10e0ad68fa570810231a529044 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Wed, 12 Mar 2025 08:19:13 +0530
Subject: [PATCH v20250312 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of the sequences in INIT state using
       pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher
       and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing the synchronized
       sequences in batches of 100.

Sequence synchronization occurs in 3 places (see the usage sketch below):
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves the sequences associated with the publications.
    - Published sequences are added to pg_subscription_rel in INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from
      pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel in INIT
      state.
    - Initiates the sequencesync worker (see above) to synchronize only the
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from
      pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
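
For illustration, a minimal usage sketch on the subscriber (the connection
string, publication and subscription names are hypothetical):

    CREATE SUBSCRIPTION sub1
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub1;
    -- published sequences are added in INIT state and synchronized by the
    -- sequencesync worker

    -- later, pick up newly added/removed tables and sequences
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- re-synchronize all published sequences (new command in this patch)
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;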
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   6 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/035_sequences.pl      | 215 ++++++
 30 files changed, 1509 insertions(+), 177 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/035_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..d4a20c5da88 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If all_states is true, return all of the requested relations; otherwise,
+ * return only the requested relations that have not yet reached READY state.
+ * For tables, that means tables whose synchronization is not yet finished;
+ * for sequences, it means sequences that are still in INIT state (i.e. not
+ * yet synchronized).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index a4d2cfdcaf5..aacd4b0f320 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ae6eafbe21a..223bd2ac529 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page_lsn of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..cd9ad6aa5bf 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh the publications and the published
+ * relations (tables and/or sequences) associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling the different REFRESH
+ * commands; its behavior is selected by the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2208,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2248,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 0a9b880d250..34a6cd6caca 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -839,7 +839,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index f5012ea27bd..0f4ed1b9e30 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10899,11 +10899,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index a3c7adbf1a8..5f44822cdf4 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -480,7 +481,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -488,6 +489,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1348,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..196d29e5165
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!*sequence_mismatch)
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the LSN when the sequence state was set. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * Synchronizing each sequence individually incurs the overhead of starting
+ * and committing a transaction repeatedly, so sequences are synchronized in
+ * batches. At the same time, the batch size must not be so large that
+ * transactions stay open for extended periods.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * If the sequence copy fails, emit a warning for any sequences whose
+		 * parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the remaining sequences to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during the current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for the next one. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Reset the per-batch counter. */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent the sequencesync worker from being restarted
+	 * repeatedly.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report that the worker failed during sequence synchronization.
+			 * Abort the current transaction so that the stats message is sent
+			 * in an idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index b8124681ce7..3c6ffba6f36 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -51,8 +51,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  */
 void
 pg_attribute_noreturn()
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -67,15 +69,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -90,7 +101,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -110,7 +123,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -122,17 +147,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is declared static so that the cached value can be reused until
+	 * the subscription relation cache is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -145,16 +175,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -162,7 +195,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +224,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index cfe638ae6af..810f38d5f90 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 835727efa9d..64c644836fe 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1027,7 +1032,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1149,7 +1157,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1205,7 +1216,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1271,7 +1285,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1406,7 +1423,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2248,7 +2268,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3729,7 +3752,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4642,8 +4668,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4722,6 +4748,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4741,14 +4771,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4793,6 +4826,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index ad25cbb39c5..4e9fb275355 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3293,7 +3293,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..5c5a775d40d 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 3de0b6bf0cd..06dfe640571 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5076,12 +5076,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 892b53c0184..59a1dfe81be 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -813,6 +813,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index a16809c7de8..a618cdac943 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2287,7 +2287,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b590f0afea9..ba53f65eac4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12187,6 +12187,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index bc8ad978369..172b2b96500 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4314,7 +4314,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index d816866f160..5ee21eab9a9 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-extern void pg_attribute_noreturn() SyncFinishWorker(void);
+extern void pg_attribute_noreturn() SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 62f69ac20b2..2336740afc6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index d40b49714f6..fbfa0237407 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -41,6 +41,7 @@ tests += {
       't/032_subscribe_use_index.pl',
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
+      't/035_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/035_sequences.pl b/src/test/subscription/t/035_sequences.pl
new file mode 100644
index 00000000000..94466a4f83f
--- /dev/null
+++ b/src/test/subscription/t/035_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync newly added
+# sequences from the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync newly
+# added sequences from the publisher, and changes to existing sequences
+# should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync another existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = false) should
+# not update the sequence values for the newly added sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = false)
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION WITH (copy_data = false) does not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should emit a warning
+# when sequence parameters differ between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning about mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0
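
For readers skimming the patch set, the user-facing flow exercised by the
TAP test above is roughly the following sketch; the object names and the
connection string are illustrative placeholders, not taken from the patch:

  -- On the publisher: publish all sequences.
  CREATE SEQUENCE s1;
  CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

  -- On the subscriber: the sequence must already exist with matching
  -- parameters; otherwise synchronization emits a WARNING and skips it.
  CREATE SEQUENCE s1;
  CREATE SUBSCRIPTION seq_sub
    CONNECTION 'host=publisher dbname=postgres' PUBLICATION seq_pub;

  -- Later, re-synchronize all published sequences, including those that
  -- were already synchronized once.
  ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;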

Attachment: v20250312-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 33c1f5430af89c9bdb75703e8cd7d3f10b30ba91 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 12 Aug 2024 14:43:11 +0530
Subject: [PATCH v20250312 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 189 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 234 insertions(+), 189 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..b8124681ce7
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+void
+pg_attribute_noreturn()
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 6af5c9fe16c..cfe638ae6af 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,59 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-static void
-pg_attribute_noreturn()
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -274,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -291,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -430,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -568,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -660,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1327,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1568,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1724,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1742,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 31ab69ea13a..835727efa9d 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1028,7 +1028,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1150,7 +1150,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1206,7 +1206,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1272,7 +1272,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1407,7 +1407,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2249,7 +2249,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3730,7 +3730,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4791,7 +4791,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..d816866f160 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+extern void pg_attribute_noreturn() SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index cc84e449836..8861740d538 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2843,7 +2843,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

#194vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#193)
Re: Logical Replication of sequences

On Wed, 12 Mar 2025 at 09:14, vignesh C <vignesh21@gmail.com> wrote:

The patch was not applying on top of HEAD because of recent commits,
here is a rebased version.

I have moved this to the next CommitFest since it will not be
committed in the current release. This also allows reviewers to focus
on the remaining patches in the current CommitFest.

Regards,
Vignesh

#195vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#193)
5 attachment(s)
Re: Logical Replication of sequences

On Wed, 12 Mar 2025 at 09:14, vignesh C <vignesh21@gmail.com> wrote:

The patch was not applying on top of HEAD because of recent commits,
here is a rebased version.

The patch was not applying on top of HEAD because of recent commits,
here is a rebased version.

Regards,
Vignesh

Attachments:

v20250325-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From ea631809eee7b3d089c8708214e13d2e0b3dc3e8 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250325 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 188 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 233 insertions(+), 188 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..63174d0cdff
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 65b98aa905f..cfe638ae6af 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -179,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index e3b2b144942..ce668a0ef54 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1028,7 +1028,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1150,7 +1150,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1206,7 +1206,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1272,7 +1272,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1407,7 +1407,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2249,7 +2249,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3726,7 +3726,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4787,7 +4787,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..a43a9b192bd 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+pg_noreturn extern void SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 4eb8dde4eed..a4f5bf477f0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2872,7 +2872,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0
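
For context on how the 0003 refactor is meant to be used, here is a minimal,
hypothetical sketch of a sequence synchronization worker calling the helpers
that the patch exports from syncutils.c. Only SyncFinishWorker(),
FetchRelationStates() and MySubscription come from the patch/tree; the
function name, its structure, and the log message are illustrative
assumptions rather than code from the posted series.

/*
 * Hypothetical sketch only -- not part of the posted patch series.
 * Shows a sequence synchronization worker reusing the shared helpers
 * that 0003 moves into syncutils.c.
 */
#include "postgres.h"

#include "access/xact.h"
#include "replication/worker_internal.h"

static void
sequencesync_worker_body(void)
{
	bool		started_tx = false;

	/*
	 * FetchRelationStates() refreshes the shared relation-state cache and
	 * may start a transaction on our behalf; it returns false if the
	 * subscription has no tables at all.
	 */
	if (!FetchRelationStates(&started_tx))
		elog(LOG, "subscription \"%s\" currently has no relations",
			 MySubscription->name);

	/* ... fetch and apply the published sequences' state here ... */

	if (started_tx)
		CommitTransactionCommand();

	/*
	 * Shared exit path from syncutils.c: commits any open transaction,
	 * flushes WAL, wakes the leader apply worker, and calls proc_exit().
	 */
	SyncFinishWorker();			/* does not return */
}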

v20250325-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 8b2a87881706e22a3b5dfd6d989ec6bd4437da36 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250325 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/logical-replication.sgml     | 229 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 360 insertions(+), 35 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index fb050635551..65abc2c3e11 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 69fc93dffc4..5fcc85e902b 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5116,8 +5116,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        replication apply worker or table synchronization worker or sequence
+        synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5259,8 +5259,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5283,10 +5283,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..73d31d1f9b7 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -111,7 +111,11 @@
    accessed.  Each table can be added to multiple publications if needed.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but their behavior differs from that of tables or groups of tables. Unlike
+   tables, sequences allow users to synchronize their current state at any
+   given time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2314,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2643,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2457,7 +2658,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 0960f5ba94a..5b15b813bad 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2023,8 +2023,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..d40fc79e7be 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..6877873fb82 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 3f5a306247e..a9bb3ae6e3d 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -126,6 +126,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2176,6 +2181,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

v20250325-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 0953c26300d06768405342a3ea573d354c777391 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250325 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
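
A minimal usage sketch (the sequence name "myseq" here is just an example):

    CREATE SEQUENCE myseq;
    SELECT nextval('myseq');
    SELECT * FROM pg_sequence_state('myseq');

The last query returns a single row containing page_lsn, last_value,
log_cnt and is_called for the sequence.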
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out | 12 +++++
 src/test/regress/sql/sequence.sql      |  2 +
 5 files changed, 118 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 6fa1d6586b8..36b5e46bd66 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19897,6 +19897,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 4b7c5113aab..ae6eafbe21a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current on-disk last_value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 0d29ef50ff2..3f03c220c4d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3403,6 +3403,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..c2d6c788271 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
@@ -233,6 +239,12 @@ SELECT nextval('sequence_test'::text);
       99
 (1 row)
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 ERROR:  currval of sequence "sequence_test" is not yet defined in this session
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..46054527df1 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
@@ -124,6 +125,7 @@ SELECT setval('sequence_test'::regclass, 32);
 SELECT nextval('sequence_test'::text);
 SELECT setval('sequence_test'::regclass, 99, false);
 SELECT nextval('sequence_test'::text);
+SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');
 DISCARD SEQUENCES;
 SELECT currval('sequence_test'::regclass);
 
-- 
2.43.0

v20250325-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 57f79cd97270c5087d440ffc5ee680d2c9867e93 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 10:33:59 +0530
Subject: [PATCH v20250325 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of the INIT-state sequences using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and the subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; synchronized sequences are committed in batches of 100.
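
Conceptually, for each INIT-state sequence the worker fetches the remote
state with a query along these lines (a simplified sketch; the worker
actually goes through the walreceiver connection, and "s" is a placeholder
sequence name):

    SELECT page_lsn, last_value, log_cnt, is_called
      FROM pg_sequence_state('s');

and then applies last_value, log_cnt and is_called to the local sequence
before marking it READY.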

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
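
As an illustration, a typical subscriber-side flow might look like this
(a sketch only; it assumes a publication "pub1" covering the sequences
already exists on the publisher, and the connection string is just an
example):

    CREATE SUBSCRIPTION sub1
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub1;

    -- later, after the sequences have advanced on the publisher,
    -- re-synchronize all subscribed sequences:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;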
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   6 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 215 ++++++
 30 files changed, 1509 insertions(+), 177 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..d4a20c5da88 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 31d269b7ee0..b53f3102764 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ae6eafbe21a..223bd2ac529 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						(long long) minv, (long long) maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page_lsn of the remote sequence at the moment
+ * it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..cd9ad6aa5bf 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schema.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'.
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Skip relations belonging to sequences. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2208,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2248,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index ede89ea3cf9..2b81d97c015 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -847,7 +847,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index f5012ea27bd..0f4ed1b9e30 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10899,11 +10899,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..2d8267e6ed8 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -480,7 +481,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -488,6 +489,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker failure time in the apply worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1348,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..196d29e5165
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in a
+ * single transaction by fetching each sequence's value and page LSN from the
+ * remote publisher and applying them to the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction.  The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
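+ *
+ * As an illustration, the sequences that are still awaiting synchronization
+ * can be listed on the subscriber with a query such as (shown here only as a
+ * debugging aid):
+ *
+ *   SELECT sr.srsubid, sr.srrelid::regclass, sr.srsubstate
+ *   FROM pg_subscription_rel sr JOIN pg_class c ON c.oid = sr.srrelid
+ *   WHERE c.relkind = 'S' AND sr.srsubstate <> 'r';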
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/* Check if a sequencesync worker is already running. */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
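+
+	/*
+	 * The query above runs on the publisher and is expected to return exactly
+	 * one row, for example (values shown are purely illustrative):
+	 *
+	 *  last_value | log_cnt | is_called | page_lsn  | seqtypid | ... | seqcycle
+	 * ------------+---------+-----------+-----------+----------+-----+----------
+	 *         100 |      32 | t         | 0/1553AC0 |       20 | ... | f
+	 */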
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!(*sequence_mismatch))
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the LSN at which the remote sequence state was last set. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences so that their parameters match those of the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * Synchronizing each sequence in its own transaction would incur the overhead
+ * of repeatedly starting and committing transactions.  Conversely, an
+ * excessively large batch would keep a transaction, and the locks it holds,
+ * open for an extended period.  The value below is a compromise between the
+ * two.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
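+
+/*
+ * For example, with a batch size of 100, synchronizing 250 sequences uses
+ * three transactions: two full batches of 100 followed by a final batch of 50.
+ */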
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * If the sequence copy fails, report a warning for any sequences whose
+		 * parameters did not match before re-throwing the error.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last remaining sequence to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for the next one. */
+			CommitTransactionCommand();
+			start_txn = true;
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch.  Record the
+	 * failure time to prevent the sequencesync worker from being restarted at
+	 * a high frequency.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters of some sequences do not match between the publisher and subscriber"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the sequence synchronization with error handling.  Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 63174d0cdff..31ca93375a8 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * This is static so that the value is remembered across calls; it only
+	 * needs to be recomputed when the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index cfe638ae6af..810f38d5f90 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ce668a0ef54..f67920b1a90 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1027,7 +1032,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1149,7 +1157,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1205,7 +1216,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1271,7 +1285,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1406,7 +1423,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2248,7 +2268,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3725,7 +3748,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4638,8 +4664,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4718,6 +4744,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4737,14 +4767,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4789,6 +4822,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 989825d3a9c..43e9dfe708a 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3354,7 +3354,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..5c5a775d40d 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0bd1387cb61..8c940e81720 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5084,12 +5084,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 892b53c0184..59a1dfe81be 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -813,6 +813,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 92a2adf4ced..ac5853ed13c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3f03c220c4d..983d71b4f89 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12190,6 +12190,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index bc8ad978369..172b2b96500 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4314,7 +4314,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index a43a9b192bd..1e6e4088e28 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
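+	/* Time of the last sequencesync worker failure; used to throttle restarts */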
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 47478969135..d221f65b7af 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1442,6 +1442,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..94466a4f83f
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
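+
+# Note: log_cnt is 32 here because nextval() WAL-logs SEQ_LOG_VALS (32) values
+# in advance; the 100th nextval() call writes a fresh WAL record, leaving 32
+# prefetched values.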
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION WITH (copy_data = false) does not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

v20250325-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchtext/x-patch; charset=US-ASCII; name=v20250325-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchDownload
From 56c4aac6ab571d8f04e8572ba0325eef32c52298 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250325 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d and \dRp commands are enhanced to show which
publications include a given sequence and which sequences are included in a
publication.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
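
For example, a publication covering every sequence in the database (the
publication name below is only illustrative) can be created with:

    CREATE PUBLICATION all_sequences_pub FOR ALL SEQUENCES;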
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 208 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 766 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89fb..52c6095eb5e 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 271ae26cbaf..f5012ea27bd 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10619,7 +10625,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ALL pub_obj_type ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10639,13 +10650,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10757,6 +10768,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19634,6 +19667,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that each pub_object_type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 428ed2d60fc..0bd1387cb61 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4337,6 +4337,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4367,9 +4368,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4385,6 +4386,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4405,6 +4407,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4456,8 +4460,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bbdb30b5f54..892b53c0184 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -656,6 +656,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d281e27aa67..e4743ac1b97 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3121,6 +3121,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index bf565afcc4e..c87ba4083cb 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1753,28 +1753,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1783,22 +1774,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1808,6 +1792,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1839,32 +1876,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -2087,6 +2145,12 @@ describeOneTableDetails(const char *schemaname,
 	for (i = 0; i < cols; i++)
 		printTableAddHeader(&cont, headers[i], true, 'l');
 
+	res = PSQLexec(buf.data);
+	if (!res)
+		goto error_return;
+
+	numrows = PQntuples(res);
+
 	/* Generate table cells to be printed */
 	for (i = 0; i < numrows; i++)
 	{
@@ -6402,7 +6466,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6419,13 +6483,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6536,6 +6607,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6550,6 +6622,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6559,7 +6632,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6634,6 +6718,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6647,6 +6733,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6659,15 +6747,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 98951aef82c..92a2adf4ced 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 23c9e3c5abf..bc8ad978369 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4246,6 +4246,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4253,6 +4269,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index b1d12585eae..fc37a8baad7 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6442,9 +6442,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..9735203fe58 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..990ae8283e0 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists all publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3fbf5a4c212..4eb8dde4eed 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2316,6 +2316,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#196Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#195)
Re: Logical Replication of sequences

Hi Vignesh,

Here are some review comments for patch v20250325-0001.

======
src/test/regress/expected/sequence.out

1.
+SELECT last_value, log_cnt, is_called  FROM pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         99 |      32 | t
+(1 row)
+

I think 32 may seem like a surprising value to anybody reading these
results. Perhaps it would help to add a comment to the .sql test
explaining why this is the expected value.
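
For instance, a sketch of what such a comment might look like (the
explanation in it is only an assumption about where the 32 comes from --
the 32-value batch that nextval() pre-logs in WAL -- so please verify
the wording against sequence.c before using it):

-- log_cnt reports how many nextval() calls can still be served before a
-- new WAL record has to be written; the 32 seen here presumably reflects
-- the batch of 32 values that nextval() pre-logs at a time.
SELECT last_value, log_cnt, is_called FROM pg_sequence_state('sequence_test');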

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#197Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#195)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for patch v20250325-0002

======
Commit message

1.
Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

~

That doesn't seem as clear as it might be. Also, IIUC the "sequences
included in a publication" is not actually implemented yet -- there is
only the "all sequences" flag.

SUGGESTION
Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command)
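
For reference, a minimal psql illustration of the two enhancements,
using the object names already created in the publication.sql hunk
quoted earlier in the thread (a sketch only, not new test content):

-- \d on a sequence should now list the publications it belongs to.
\d+ pub_test.regress_pub_seq1
-- \dRp+ on a publication should now show the "All sequences" column.
\dRp+ regress_pub_forallsequences1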

======
doc/src/sgml/ref/create_publication.sgml

2.
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
    user to be a superuser.

IMO these should all be using <literal> SGML markup same as elsewhere
on this page, not <command> markup.

======
src/backend/commands/publicationcmds.c

3.
if (!superuser_arg(newOwnerId))
{
    if (form->puballtables)
        ereport(ERROR,
                errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                errmsg("permission denied to change owner of publication \"%s\"",
                       NameStr(form->pubname)),
                errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
    if (form->puballsequences)
        ereport(ERROR,
                errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                errmsg("permission denied to change owner of publication \"%s\"",
                       NameStr(form->pubname)),
                errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
    if (is_schema_publication(form->oid))
        ereport(ERROR,
                errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                errmsg("permission denied to change owner of publication \"%s\"",
                       NameStr(form->pubname)),
                errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
}

I wondered if there's too much duplicated code here. Maybe it's better
to share a common ereport?

SUGGESTION

if (!superuser_arg(newOwnerId))
{
    char *hint_msg = NULL;

    if (form->puballtables)
        hint_msg = _("The owner of a FOR ALL TABLES publication must be a superuser.");
    else if (form->puballsequences)
        hint_msg = _("The owner of a FOR ALL SEQUENCES publication must be a superuser.");
    else if (is_schema_publication(form->oid))
        hint_msg = _("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.");

    if (hint_msg)
        ereport(ERROR,
                errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                errmsg("permission denied to change owner of publication \"%s\"",
                       NameStr(form->pubname)),
                errhint(hint_msg));
}

======
src/bin/psql/describe.c

describeOneTableDetails:

4.
+    res = PSQLexec(buf.data);
+    if (!res)
+        goto error_return;
+
+    numrows = PQntuples(res);
+

Isn't this same code already done a few lines above in the same
function? Maybe I misread something.

======
src/test/regress/sql/publication.sql

5.
+-- check that describe sequence lists all publications the sequence belongs to

Might be clearer to say: "lists both" instead of "lists all"
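
i.e. something along these lines (sketch only):

-- check that describe sequence lists both publications the sequence belongs to
\d+ pub_test.regress_pub_seq1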

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#198Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#195)
Re: Logical Replication of sequences

Hi Vignesh,

FYI, the patch v20250325-0004 failed to apply (atop 0001,0002,0002)
due to recent master changes.

Checking patch src/backend/commands/sequence.c...
error: while searching for:
(long long) minv, (long long) maxv)));

/* Set the currval() state only if iscalled = true */
if (iscalled)
{
elm->last = next; /* last returned number */
elm->last_valid = true;

error: patch failed: src/backend/commands/sequence.c:994
error: src/backend/commands/sequence.c: patch does not apply

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#199vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#197)
5 attachment(s)
Re: Logical Replication of sequences

On Mon, 14 Apr 2025 at 08:26, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Some review comments for patch v20250325-0002

======
Commit message

1.
Furthermore, enhancements to psql commands (\d and \dRp) now allow for better
display of publications containing specific sequences or sequences included
in a publication.

~

That doesn't seem as clear as it might be. Also, IIUC the "sequences
included in a publication" is not actually implemented yet -- there is
only the "all sequences" flag.

SUGGESTION
Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command)

Modified

======
doc/src/sgml/ref/create_publication.sgml

2.
<para>
To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <command>FOR TABLES IN SCHEMA</command>,
+   <command>FOR ALL TABLES</command> and
+   <command>FOR ALL SEQUENCES</command> clauses require the invoking
user to be a superuser.

IMO these should all be using <literal> SGML markup same as elsewhere
on this page, not <command> markup.

Modified

======
src/backend/commands/publicationcmds.c

3.
if (!superuser_arg(newOwnerId))
{
    if (form->puballtables)
        ereport(ERROR,
                errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                errmsg("permission denied to change owner of publication \"%s\"",
                       NameStr(form->pubname)),
                errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
    if (form->puballsequences)
        ereport(ERROR,
                errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                errmsg("permission denied to change owner of publication \"%s\"",
                       NameStr(form->pubname)),
                errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
    if (is_schema_publication(form->oid))
        ereport(ERROR,
                errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                errmsg("permission denied to change owner of publication \"%s\"",
                       NameStr(form->pubname)),
                errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
}

I wondered if there's too much duplicated code here. Maybe it's better
to share a common ereport?

SUGGESTION

if (!superuser_arg(newOwnerId))
{
char *hint_msg = NULL;

if (form->puballtables)
hint_msg = _("The owner of a FOR ALL TABLES publication must be a
superuser.");
else if (form->puballsequences)
hint_msg = _("The owner of a FOR ALL SEQUENCES publication must be
a superuser.");
else if (is_schema_publication(form->oid))
hint_msg = _("The owner of a FOR TABLES IN SCHEMA publication must
be a superuser.");
if (hint_msg)
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("permission denied to change owner of publication \"%s\"",
NameStr(form->pubname)),
errhint(hint_msg));
}

I felt the existing code is fine as-is here; it is easier to review
when each error hint appears alongside its ereport.
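
For reference, both versions enforce the same behavior; a minimal SQL
sketch (hypothetical role and publication names, expected messages taken
from the patch):

-- run as a superuser
CREATE ROLE regress_pubowner LOGIN;                 -- not a superuser
CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;
ALTER PUBLICATION pub_all_seq OWNER TO regress_pubowner;
-- expected:
-- ERROR:  permission denied to change owner of publication "pub_all_seq"
-- HINT:  The owner of a FOR ALL SEQUENCES publication must be a superuser.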

======
src/bin/psql/describe.c

describeOneTableDetails:

4.
+ res = PSQLexec(buf.data);
+ if (!res)
+ goto error_return;
+
+ numrows = PQntuples(res);
+

Isn't this same code already done a few lines above in the same
function? Maybe I misread something.

Modified

======
src/test/regress/sql/publication.sql

5.
+-- check that describe sequence lists all publications the sequence belongs to

Might be clearer to say: "lists both" instead of "lists all"

Modified

Regarding the comment at [1]: on further thought, I have removed that test,
as one test is enough for that change, so the comment handling is no longer
required.
Regarding the comment at [2]: the attached patches include the rebased
changes as well.

The attached v20250414 patch set contains the changes for the above.

[1]: /messages/by-id/CAHut+PsgZkEegDzhJ2=DwDkrks6g6aQ6LX1-M+XBBt4PP-MX3g@mail.gmail.com
[2]: /messages/by-id/CAHut+Pv5XMnX+QSSDhL5eqXV=kp22jyYOgFx_u7kSMhwvktvrg@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250414-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch; charset=US-ASCII)
From 2b38a729e487f69fe8559b545002c1a744b59230 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250414 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 111 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 1c5cfee25d1..4fb6416aa56 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..e542351b258 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current on-disk last_value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..8071134643c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..4bc21c7af95 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..23341a36caa 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
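
For anyone trying the function added by this patch, a minimal usage sketch
(the sequence name is hypothetical; the output columns follow the pg_proc
entry above):

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq'::regclass);
-- returns one row: the sequence page's LSN plus the on-disk
-- last_value, log_cnt and is_called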

v20250414-0005-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 13dee73e340069e15a2516035c73bb6593ca5008 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250414 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  19 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/logical-replication.sgml     | 229 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 +++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 +++++++
 7 files changed, 360 insertions(+), 35 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..8b456af3280 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c1674c22cb2..c11c4e387fb 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5169,8 +5169,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        </para>
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        replication apply worker or table synchronization worker or sequence
+        synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5312,8 +5312,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5336,10 +5336,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..73d31d1f9b7 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -111,7 +111,11 @@
    accessed.  Each table can be added to multiple publications if needed.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but their behavior differs from that of tables or groups of tables. Unlike
+   tables, sequences allow users to synchronize their current state at any
+   given time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some test sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2314,18 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2643,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2457,7 +2658,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index c421d89edff..f5680347a1f 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2025,8 +2025,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..d40fc79e7be 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..6877873fb82 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 737e7489b78..21edf725843 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
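
As a quick illustration of the pg_publication_sequences view documented in
this patch, the sequences published by a given publication can be listed
like this (the publication name is hypothetical; columns as defined above):

SELECT pubname, schemaname, sequencename
  FROM pg_publication_sequences
 WHERE pubname = 'pub1'
 ORDER BY schemaname, sequencename;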

v20250414-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From feee11cdca9fc0cbad7bb5adee3a7982c12f334b Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250414 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89fb..dcf1a68308f 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c6e6d3b2b86..4209d8a40a6 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b426b5e4736..76aa26fa714 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -660,6 +660,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c03eca8e50..f953cad69ef 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3159,6 +3159,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index c916b9299a8..10dc03cd7cb 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d16bc208654..ddc99a6aac9 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2341,6 +2341,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
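
To make the regression diffs above concrete, here is a minimal publisher-side
sketch of the syntax those tests exercise (object names are illustrative and
not taken from the patch):

    CREATE SEQUENCE demo_seq;
    CREATE PUBLICATION demo_pub FOR ALL SEQUENCES;

    -- \dRp+ now reports the new "All sequences" column, and pg_publication
    -- exposes the corresponding puballsequences flag.
    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname = 'demo_pub';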

Attachment: v20250414-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From c9ed4a72421c5394c55d8cbcd8bc14855843ceb8 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250414 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 188 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 233 insertions(+), 188 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..63174d0cdff
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..b57563773e2 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -179,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5ce596f4576..f63d59f2036 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1028,7 +1028,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1150,7 +1150,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1206,7 +1206,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1272,7 +1272,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1407,7 +1407,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2249,7 +2249,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3726,7 +3726,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4787,7 +4787,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..a43a9b192bd 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+pg_noreturn extern void SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ddc99a6aac9..9cf79b3a019 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2903,7 +2903,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250414-0004-Enhance-sequence-synchronization-during.patch (text/x-patch)
From 0b384179f0fa342a712a1d1e94e2c74bf8743c0d Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 14 Apr 2025 09:19:07 +0530
Subject: [PATCH v20250414 4/5] Enhance sequence synchronization during 
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG17 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG17 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.
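
For illustration, a minimal end-to-end usage sketch of the workflow described
above (the connection string and object names are invented; only the command
shapes follow this commit message and the 0002 patch, so treat this as a
sketch rather than a reference):

    -- Publisher: publish all sequences.
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- Subscriber: the published sequences are added to pg_subscription_rel
    -- in INIT state and a sequencesync worker copies their values.
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- Re-synchronize all sequences later, e.g. shortly before an upgrade
    -- or switchover.
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;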
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 323 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   6 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 215 ++++++
 30 files changed, 1509 insertions(+), 177 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..d4a20c5da88 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	Publication *publication;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If all_states is true, return the requested relations regardless of their
+ * state; otherwise return only the relations that have not yet reached
+ * READY state.  For sequences this means they are still in INIT state,
+ * since a sequence always moves directly from INIT to READY during
+ * synchronization.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index e542351b258..f3d6abc7ad1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization: it
+ * is recorded in the pg_subscription_rel system catalog and reflects the
+ * LSN of the remote sequence page at the moment the sequence was
+ * synchronized from the publisher.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..cd9ad6aa5bf 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables or tables in schemas.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Get the table and sequence lists from the publisher and build
+			 * local relation status info.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables or tables in schemas.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh the publication objects (tables and
+ * sequences) associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note that this is a common function for handling the different REFRESH
+ * commands according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +948,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +973,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +990,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1008,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1054,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				sub_remove_rels[remove_rel_len].relid = relid;
+				sub_remove_rels[remove_rel_len++].state = state;
 
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1008,6 +1109,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 */
 		for (off = 0; off < remove_rel_len; off++)
 		{
+			/* Sequences do not have tablesync slots, so skip them. */
+			if (get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)
+				continue;
+
 			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
 				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
 			{
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1611,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1624,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2208,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2248,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2433,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..2d8267e6ed8 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -480,7 +481,7 @@ retry:
 			break;
 
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -488,6 +489,14 @@ retry:
 			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication tablesync worker");
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "unknown worker type");
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync failure time in the subscription's apply worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1348,6 +1388,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_UNKNOWN:
 				/* Should never happen. */
 				elog(ERROR, "unknown worker type");
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..196d29e5165
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can read the sequences that need to be synchronized
+ *    directly from the pg_subscription_rel system catalog, whereas the
+ *    launcher process has no direct database access and would need a
+ *    framework for establishing connections to the databases to retrieve
+ *    the sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there are free sync worker slot(s), start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			tableRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(tableRow), tableRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not fetch relation info for sequence \"%s.%s\" from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!*sequence_mismatch)
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the LSN of the remote sequence page that was synchronized. */
+	return seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly, but we also want to avoid keeping
+ * transactions open for extended periods, which an excessively high batch
+ * size would cause.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	*/
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or last
+		 * remaining sequences to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+				sequence_sync_error = true;
+
+			report_mismatched_sequences(mismatched_seqs);
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\" ",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for next batch */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Setting
+	 * the failure time to prevent repeated initiation of the sequencesync
+	 * worker.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 63174d0cdff..31ca93375a8 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates()
 {
+	/*
+	 * This is declared as static, since the same value can be used until the
+	 * system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b57563773e2..bb010479f2c 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f63d59f2036..8f9a5d88182 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1027,7 +1032,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1149,7 +1157,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1205,7 +1216,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1271,7 +1285,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1406,7 +1423,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2248,7 +2268,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3725,7 +3748,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4638,8 +4664,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4718,6 +4744,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4737,14 +4767,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4789,6 +4822,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 60b12446a1c..52f4b579a44 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..5c5a775d40d 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4209d8a40a6..bc40cea2f92 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 76aa26fa714..b43c44e4b05 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -817,6 +817,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10dc03cd7cb..6fddb5ea635 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8071134643c..a84fb506571 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index a43a9b192bd..1e6e4088e28 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..94466a4f83f
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,215 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

#200Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#195)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for v20250525-0004.

======
Commit message

1.
A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
a) Retrieves remote values of sequences with pg_sequence_state() INIT.
b) Log a warning if the sequence parameters differ between the
publisher and subscriber.
c) Sets the local sequence values accordingly.
d) Updates the local sequence state to READY.
e) Repeat until all done; Commits synchronized sequences in batches of 100

~

/Log a warning/Logs a warning/
/Repeat until all done/Repeats until all done/

~~~

2.
1) CREATE SUBSCRIPTION
- (PG17 command syntax is unchanged)
- The subscriber retrieves sequences associated with publications.
- Published sequences are added to pg_subscription_rel with INIT state.
- Initiates the sequencesync worker (see above) to synchronize all
sequences.

~

2a.
Since PG18 is frozen now I think you can say "PG18 command syntax is unchanged"
(replace same elsewhere in this commit message)

~

2b.
/Initiates/Initiate/
(replace same elsewhere in this commit message)

======
src/backend/catalog/pg_publication.c

pg_get_publication_sequences:

3.
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char    *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List    *sequences = NIL;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {

The 'pubname' and 'publication' variables can be declared later,
within the SRF_IS_FIRSTCALL block.

======
src/backend/commands/subscriptioncmds.c

CreateSubscription:

4.
+ /*
+ * XXX: If the subscription is for a sequence-only publication, creating
+ * this origin is unnecessary. It can be created later during the ALTER
+ * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+ * include tables or tables in schemas.
+ */

Since it already says "to include tables", I didn't think you needed
to say "tables in schemas".

~~~

5.
+ *
+ * XXX: If the subscription is for a sequence-only publication,
+ * creating this slot is unnecessary. It can be created later
+ * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+ * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+ * publication is updated to include tables or tables in schema.
  */

(same comment as above #4).

I thought maybe it is redundant to say "or tables in schema".

~~~

AlterSubscription_refresh:

6.
+#ifdef USE_ASSERT_CHECKING
+ if (resync_all_sequences)
+ Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+

Maybe this can have a comment like /* Sanity checks for parameter values */

~~~

7.
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
  /*
- * For READY state, we would have already dropped the
- * tablesync origin.
+ * A single sequencesync worker synchronizes all sequences, so
+ * only stop workers when relation kind is not sequence.
  */
- if (state != SUBREL_STATE_READY)
+ if (relkind != RELKIND_SEQUENCE)

Should those assignments...:
sub_remove_rels[remove_rel_len].relid = relid;
sub_remove_rels[remove_rel_len++].state = state;

...be done only inside the "if (relkind != RELKIND_SEQUENCE)". It
seems like they'll be skipped anyway in subsequent code -- see "if
(get_rel_relkind(sub_remove_rels[off].relid) == RELKIND_SEQUENCE)".
Perhaps if these assignments are moved, then the subsequent skipping
code is also not needed anymore?
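
For example, something like this (just a sketch; the rest of the branch
body is not visible in the quoted hunk, so it is only assumed here):

if (relkind != RELKIND_SEQUENCE)
{
    sub_remove_rels[remove_rel_len].relid = relid;
    sub_remove_rels[remove_rel_len++].state = state;

    /* ... the existing logic that stops the tablesync worker ... */
}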

======
src/backend/replication/logical/launcher.c

logicalrep_worker_launch:

8.
+ case WORKERTYPE_SEQUENCESYNC:
+ snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+ snprintf(bgw.bgw_name, BGW_MAXLEN,
+ "logical replication sequencesync worker for subscription %u",
+ subid);
+ snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+ break;
+

Previously all these cases were in alphabetical order. Maybe you can
move this case to keep it that way.

~~~

pg_stat_get_subscription:

9.
  case WORKERTYPE_TABLESYNC:
  values[9] = CStringGetTextDatum("table synchronization");
  break;
+ case WORKERTYPE_SEQUENCESYNC:
+ values[9] = CStringGetTextDatum("sequence synchronization");
+ break;

Previously all these cases were in alphabetical order. Maybe you can
move this case to keep it that way.

======
.../replication/logical/sequencesync.c

ProcessSyncingSequencesForApply:

10.
+ * To prevent starting the sequencesync worker at a high frequency after a
+ * failure, we store its last failure time. We start the sequencesync worker
+ * again after waiting at least wal_retrieve_retry_interval.

I felt this comment might be better inside the function where it is
doing the TimestampDifferenceExceeds check.

~~~

10.
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);

Maybe the Assert should come 1st before the tx stuff?
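
i.e. a sketch of the reordering, using the quoted code as-is:

Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);

if (!started_tx)
{
    StartTransactionCommand();
    started_tx = true;
}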

~~~

11.
+ /*
+ * If there are free sync worker slot(s), start a new sequencesync
+ * worker, and break from the loop.
+ */

Why plural? Can't you just say:

SUGGESTION:
If there is a free sync worker slot, start a new sequencesync worker,
and break from the loop.

~~~

fetch_remote_sequence_data:

12.
+ Oid tableRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+ LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};

Is 'tableRow' a good name for this? Calling it 'seqRow' might be better.

~~~

13.
+ seq_params_match = seqform->seqtypid == seqtypid &&
+ seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+ seqform->seqcycle == seqcycle &&
+ seqform->seqstart == seqstart &&
+ seqform->seqincrement == seqincrement;

By the time the WARNING for this mismatch gets logged, the knowledge
of *what* differed seems lost. Maybe it is not possible, but I wonder
whether the warning would be much more useful if you could somehow also
log the attribute values. That would help the user understand what
caused the mismatch in the first place; otherwise they will have to go
to the trouble of figuring it out for themselves.
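
As an illustration only (this is not code from the patch), the
comparison site could log the values using the variables quoted above;
the exact message wording is just a sketch:

if (!seq_params_match)
    ereport(WARNING,
            errmsg("parameters differ for the remote and local sequence \"%s.%s\"",
                   nspname, relname),
            errdetail("Remote start/increment/min/max/cycle: %lld/%lld/%lld/%lld/%s; local: %lld/%lld/%lld/%lld/%s.",
                      (long long) seqstart, (long long) seqincrement,
                      (long long) seqmin, (long long) seqmax,
                      seqcycle ? "true" : "false",
                      (long long) seqform->seqstart, (long long) seqform->seqincrement,
                      (long long) seqform->seqmin, (long long) seqform->seqmax,
                      seqform->seqcycle ? "true" : "false"));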

~~~

copy_sequence:

14.
+/*
+ * Copy existing data of a sequence from publisher.

/from/from the/

~~~

15.
+ Oid tableRow[] = {OIDOID, CHAROID};

Should this be 'seqRow' or 'relRow'?

~~~

16.
+ *sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+ nspname, relname,
+ &seq_log_cnt, &seq_is_called,
+ &seq_page_lsn, &seq_last_value);
+
+ /* Update the sequence only if the parameters are identical. */
+ if (*sequence_mismatch == false)
+ SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+ seq_log_cnt);
+
+ /* Return the LSN when the sequence state was set. */
+ return seq_page_lsn;

16a.
Is that a bug in the code? AFAICT the fetch_remote_sequence_data is
going to overwrite the new 'seq_page_lsn' even if some mismatch is
detected. Is that intentional?

~

16b.
Why not say "if (!*sequence_mismatch)"

~

16c.
Since it is not 100% clear from this code what the value of
seq_page_lsn will be if there was a mismatch, maybe you should have a
more explicit return here:

SUGGESTION
return *sequence_mismatch ? InvalidXLogRecPtr : seq_page_lsn;
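
Putting 16b and 16c together (which also side-steps 16a, because the
fetched LSN is never returned on a mismatch), the tail of
copy_sequence() could look roughly like this (a sketch, reusing the
names from the patch):

*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
                                                 nspname, relname,
                                                 &seq_log_cnt, &seq_is_called,
                                                 &seq_page_lsn, &seq_last_value);

/* Update the sequence only if the parameters are identical. */
if (!*sequence_mismatch)
    SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
                seq_log_cnt);

/* Don't return a possibly meaningless LSN when the parameters differ. */
return *sequence_mismatch ? InvalidXLogRecPtr : seq_page_lsn;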

~~~

append_mismatched_sequences:

17.
+/*
+ * append_mismatched_sequences
+ *
+ * Appends details of sequences that have discrepancies between the publisher
+ * and subscriber to the mismatched_seqs string.
+ */

Hmm. It would be good if it did include sequence details, but I think
for now there are no real "details of sequences" here, just the schema
name and sequence name.

~~~

LogicalRepSyncSequences:

18.
+/*
+ * Synchronizing each sequence individually incurs overhead from starting
+ * and committing a transaction repeatedly. Additionally, we want to avoid
+ * keeping transactions open for extended periods by setting excessively
+ * high values.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100

Just saying "by setting excessively high values." doesn't really have
any context. high values of what? You have to guess what it means.

I think it is more like below.

SUGGESTION
We batch synchronize multiple sequences per transaction, because the
alternative of synchronizing each sequence individually incurs
overhead of starting and committing transactions repeatedly. On the
other hand, we want to avoid keeping this batch transaction open for
extended periods so it is currently limited to 100 sequences per
batch.

~~~

19.
+ /*
+ * In case sequence copy fails, throw a warning for the sequences that
+ * did not match before exiting.
+ */
+ PG_TRY();
+ {
+ sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+ &sequence_mismatch);
+ }
+ PG_CATCH();
+ {
+ if (sequence_mismatch)
+ append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+ report_mismatched_sequences(mismatched_seqs);
+ PG_RE_THROW();
+ }

If we got to the CATCH then it means some ERROR happened, but at that
point I really don't think sequence_mismatch is likely to be set to
true. Maybe you are just being extra careful, "just in case"?

~~~

20.
+ if (mismatched_seqs->len)
+ sequence_sync_error = true;
+
+ report_mismatched_sequences(mismatched_seqs);

I think you can put that call to report_mismatched_sequences under the
same condition, because if there are no mismatches then there will be
nothing to report anyhow.
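
That is (a sketch, with the code otherwise as quoted):

if (mismatched_seqs->len)
{
    sequence_sync_error = true;
    report_mismatched_sequences(mismatched_seqs);
}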

~~~

21.
+ /*
+ * Sequence synchronization failed due to a parameter mismatch. Setting
+ * the failure time to prevent repeated initiation of the sequencesync
+ * worker.
+ */

/Setting/Set/

/to prevent repeated initiation/to prevent immediate initiation/ (??)

======
src/backend/replication/logical/syncutils.c

FetchRelationStates:

22.
+ /*
+ * This is declared as static, since the same value can be used until the
+ * system table is invalidated.
+ */
  static bool has_subtables = false;

/This/has_subtables/

======
src/backend/replication/logical/tablesync.c

ProcessSyncingTablesForApply:

23.
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+

Should this Assert come before the other tx code?

~~~

AllTablesyncsReady:

24.
+ bool has_tables = false;

  /* We need up-to-date sync state info for subscription tables here. */
- has_subrels = FetchRelationStates(&started_tx);
-
- if (started_tx)
- {
- CommitTransactionCommand();
- pgstat_report_stat(true);
- }
+ has_tables = FetchRelationStates();

Don't need to assign has_tables to false if the value will be
immediately overwritten anyhow.

======
src/bin/pg_dump/pg_dump.c

getSubscriptionRelations:

25.
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)

Although you changed the function comment and the function name,
there is still code within that function referring to tables. Should
that also be changed to relations?

======
src/include/commands/sequence.h

26.
+#define SEQ_LOG_CNT_INVALID 0

Zero seemed like a curious value to use as the "invalid" count. I was
wondering whether it would be better to define this as -1, and then in
the SetSequence function do some explicit code like below:

seq->log_cnt = log_cnt == SEQ_LOG_CNT_INVALID ? 0 : log_cnt;

======
src/test/subscription/t/036_sequences.pl

27.
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');

I think this deserves some explanatory comment about the magic number
32. It may be better to have a general comment at the top of this TAP
test to explain the other magic numbers like 31, etc.

~~~

28.
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+ 'REFRESH PUBLICATION does not sync existing sequence');

This test would be clearer if you also checked those same sequence
values at the publisher side, to show they are different. Don't need
to do it every time, but maybe just this first time would be good.

~~~

29.
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+ 'REFRESH PUBLICATION will sync newly published sequence');
+

(This is the copy_data=false test)

29a.
Maybe it would be good here also to show the sequence value at the
publisher, to see that it is different.

~

29b.
The message 'REFRESH PUBLICATION will sync newly published sequence'
seems wrong because the values are NOT synced when copy_data=false

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#201Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#195)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for patch v20250325-0005 (docs).

======
doc/src/sgml/catalogs.sgml

(52.55. pg_subscription_rel)

1.
State code: i = initialize, d = data is being copied, f = finished
table copy, s = synchronized, r = ready (normal replication)

~

This is not part of the patch, but AFAIK not all of those states are
relevant if the srrelid was a SEQUENCE relation. Should there be 2
sets of states given here for what is possible for tables/sequences?

======
doc/src/sgml/config.sgml

(19.6.4. Subscribers)

2.
        <para>
         In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        replication apply worker or table synchronization worker or sequence
+        synchronization worker will be respawned.
        </para>

Does it though? I thought /often/quickly/ because if there is some
ERROR the respawning may occur again and again forever, so you are not
really limiting how many times it occurs, only how quickly the
respawning happens.

======
doc/src/sgml/logical-replication.sgml

(29.1 Publication)

3.
    Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but their behavior differs from that of tables or groups of tables. Unlike
+   tables, sequences allow users to synchronize their current state at any
+   given time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>

This doesn't really make sense. The change seems too obviously just
tacked onto the end of the existing documentation. For example, it is
strange to say "Publications may currently only contain tables and all
tables in schema." and then later say, oh, by the way, "Publications
can include sequences as well". I think the whole paragraph may need
some reworking.

The preceding para on this page also still says "A publication is a
set of changes generated from a table or a group of tables" which also
failed to mention sequences.

OTOH, I think it would get too confusing to mush everything together,
so after saying that publications can contain tables and sequences, I
think there should be one paragraph that just talks about publishing
tables, followed by another paragraph that just talks about publishing
sequences. Probably this should mention ALL SEQUENCES too since it
already mentioned ALL TABLES.

~~~

(29.6. Replicating Sequences)

4.
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish the
+   sequence using <link
linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>

/first publish the sequence/first publish them/

~~~

5.
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   to synchronize the sequences after executing any of the above subscriber
+   commands, and will exit once the sequences are synchronized.
+  </para>

IMO the name of the worker makes it obvious what it does so I think
you can remove the redundant words "to synchronize the sequences" from
this sentence.

~~~

(29.7.1. Sequence Definition Mismatches)

6.
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Subsequently, execute <link
linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>.
+   </para>

/Subsequently,/Then,/
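
Maybe a tiny example would also help here; something like the sketch
below (the sequence/subscription names and the MAXVALUE are invented,
just to show one possible mismatch):

-- On the subscriber: align the definition with the publisher's,
-- e.g. if only MAXVALUE differed.
ALTER SEQUENCE s1 MAXVALUE 9223372036854775807;

-- Then re-synchronize the published sequences.
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;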

~~~

(29.6.3. Examples)

7.
+   <para>
+    Create some test sequences on the publisher.

This whole section is just examples, so I don't think you need to call
them "test" sequences. Just say "Create some sequences on the
publisher.".

~~~

(29.9. Restrictions)

8.
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequence itself would still show the start value on
+     the subscriber.  If the subscriber is used as a read-only database, then
+     this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either
by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>

This doesn't seem strictly correct to say "but the sequence itself
would still show the start value on the subscriber". AFAIK,
synchronization also happens on the CREATE SUBSCRIPTION command when
copy_data=true, so if any sequences had been published (FOR ALL
SEQUENCES) then the subscriber sequence would get the up-to-date
current values (not the "start value"), right?
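
e.g. I was thinking of a scenario like this (names invented; just a
sketch of my understanding):

-- Publisher:
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
SELECT nextval('s1');   -- the sequence has moved past its start value

-- Subscriber: with copy_data = true the initial sync already copies
-- the current sequence state, not the start value.
CREATE SUBSCRIPTION seq_sub
  CONNECTION 'host=publisher dbname=postgres'
  PUBLICATION seq_pub;

-- Later, to pick up newer publisher-side sequence values:
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;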

~~~

(29.13.2. Subscribers)

9.
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
      controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     subscription initialization or when new tables or sequences are added.
    </para>

This seems kind of misleading because there is no "amount of
parallelism" for sequences since there is never more than one sequence
sync worker. Maybe there is a more accurate way to word this.

SUGGESTION (maybe like this?)
max_sync_workers_per_subscription controls how many tables can be
synchronized in parallel during subscription initialization or when
new tables are added. One additional worker is also needed for
sequence synchronization.
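
(If it helps to illustrate the suggested wording, the tuning involved
is just something like the sketch below; the value 4 is arbitrary:)

ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
SELECT pg_reload_conf();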

======
doc/src/sgml/ref/alter_subscription.sgml

(ALTER SUBSCRIPTION REFRESH PUBLICATION / copy_data)

10.
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for
recommendations on how
+          to handle any warnings about differences in the sequence definition
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>

/any warnings about differences in the sequence definition between/any
warnings about sequence definition differences between/

~~~

(ALTER SUBSCRIPTION REFRESH PUBLICATION SEQUENCES)

11.
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about differences in the
+      sequence definition between the publisher and the subscriber.
+     </para>

Ditto my previous review comment #10.

======
doc/src/sgml/ref/create_subscription.sgml

(CREATE SUBSCRIPTION / copy_data)

12.
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about
differences in
+          the sequence definition between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>

Ditto my previous review comment #10.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#202vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#200)
5 attachment(s)
Re: Logical Replication of sequences

On Tue, 15 Apr 2025 at 12:03, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Some review comments for v20250325-0004.

======
Commit message
ProcessSyncingSequencesForApply:

10.
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);

Maybe the Assert should come 1st before the tx stuff?

The existing order is correct; the get_rel_relkind call requires a transaction to be started.

13.
+ seq_params_match = seqform->seqtypid == seqtypid &&
+ seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+ seqform->seqcycle == seqcycle &&
+ seqform->seqstart == seqstart &&
+ seqform->seqincrement == seqincrement;

By the time the WARNING for this mismatch gets logged, the knowledge
of *what* differed seems lost. Maybe it is not possible, but I
wondered if it would make the warning much more useful if you could
somehow also log attribute values. That will help the user understand
what caused the clash in the first place. Otherwise they will have to
go to the trouble to try to figure it out for themselves.

I felt it might be ok for users to get this using the sequence name.

16.
+ *sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+ nspname, relname,
+ &seq_log_cnt, &seq_is_called,
+ &seq_page_lsn, &seq_last_value);
+
+ /* Update the sequence only if the parameters are identical. */
+ if (*sequence_mismatch == false)
+ SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+ seq_log_cnt);
+
+ /* Return the LSN when the sequence state was set. */
+ return seq_page_lsn;

16a.
Is that a bug in the code? AFAICT the fetch_remote_sequence_data is
going to overwrite the new 'seq_page_lsn' even if some mismatch is
detected. Is that intentional?

In the caller we set the sequence lsn only if sequence_mismatch is
false, so there is no issue.

19.
+ /*
+ * In case sequence copy fails, throw a warning for the sequences that
+ * did not match before exiting.
+ */
+ PG_TRY();
+ {
+ sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+ &sequence_mismatch);
+ }
+ PG_CATCH();
+ {
+ if (sequence_mismatch)
+ append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+ report_mismatched_sequences(mismatched_seqs);
+ PG_RE_THROW();
+ }

If we got to the CATCH then it means some ERROR happened, but at that
point I really don't think sequence_mismatch is likely to be set as
true. Maybe it is you just being extra careful, "just in case" ?

In this function we copy the sequences_not_synced sequences one by
one. While copying a sequence, if there is a mismatch (e.g. the
sequence type, min, max, etc. don't match), sequence_mismatch will be
set. If an exception is later raised while copying another sequence
and we reach the catch block, we report the mismatched sequences in
that case.

21.
+ /*
+ * Sequence synchronization failed due to a parameter mismatch. Setting
+ * the failure time to prevent repeated initiation of the sequencesync
+ * worker.
+ */

/to prevent repeated initiation/to prevent immediate initiation/ (??)

I felt repeated is correct here as we don't want to repeatedly start
the sequence sync worker after every failure.

23.
+ if (!started_tx)
+ {
+ StartTransactionCommand();
+ started_tx = true;
+ }
+
+ Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+

Should this Assert come before the other tx code?

The existing order is correct; the get_rel_relkind call requires a transaction to be started.

25.
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)

Although you changed the function comment and the function name,
there is still code within that function referring to tables. Should
that also be changed to relations?

You are talking about the error message, right? I have changed that.

======
src/include/commands/sequence.h

26.
+#define SEQ_LOG_CNT_INVALID 0

Zero seemed like a curious value to use as the "invalid" count. I was
wondering whether it would be better to define this as -1, and then in
the SetSequence function do something explicit like below:

seq->log_cnt = log_cnt == SEQ_LOG_CNT_INVALID ? 0 : log_cnt;

I felt using 0 in this case is ok.

======
src/test/subscription/t/036_sequences.pl

27.
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');

I think this deserves some explanatory comment about the magic number
32? But, it may be better to have a general comment at the top of this
TAP test to explain other magic numbers like 31 etc...

log_cnt is the number of prefetched sequence values; it is not a
special magic value. I felt there was no need to add a comment for this.
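
For reference, the behavior being described is roughly the following
(a sketch, assuming the usual 32-value WAL prefetch, SEQ_LOG_VALS, and
an invented sequence name):

CREATE SEQUENCE seq1 START 100;
SELECT nextval('seq1');            -- returns 100
SELECT last_value, log_cnt, is_called
  FROM pg_sequence_state('seq1');
-- last_value | log_cnt | is_called
--        100 |      32 | t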

The rest of the comments are fixed.

Regarding the comments from [1]/messages/by-id/CAHut+Ptc0gG=4j_BVqxHRGa=TKY_PsYu0RdsT6YuWPiNkSRhOQ@mail.gmail.com.
1. State code: i = initialize, d = data is being copied, f = finished
table copy, s = synchronized, r = ready (normal replication)
~
This is not part of the patch, but AFAIK not all of those states are
relevant if the srrelid was a SEQUENCE relation. Should there be 2
sets of states given here for what is possible for tables/sequences?

I have updated the same page to indicate which states are not
applicable for sequences.

3.     Publications may currently only contain tables and all tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal>. Publications can include sequences as well,
+   but their behavior differs from that of tables or groups of tables. Unlike
+   tables, sequences allow users to synchronize their current state at any
+   given time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>

This doesn't really make sense. The change seems too obviously just
tacked onto the end of the existing documentation. e.g. it is strange
saying "Publications may currently only contain tables and all tables
in schema." and then later saying, oh, BTW, "Publications can include
sequences as well". I think the whole paragraph may need some
reworking.

The preceding para on this page also still says "A publication is a
set of changes generated from a table or a group of tables" which also
failed to mention sequences.

OTOH, I think it would get too confusing to mush everything together,
so after saying that publications can include tables and sequences,
there should be one paragraph that just talks about publishing tables,
followed by another paragraph that just talks about publishing
sequences. Probably this should mention ALL SEQUENCES too, since it
already mentions ALL TABLES.

I did not create separate paragraphs for tables and sequences, as there
is a separate section for sequences and a reference to it here.

The rest of the comments are fixed. The attached v20250416 version of
the patches has the changes for the same.

[1]: /messages/by-id/CAHut+Ptc0gG=4j_BVqxHRGa=TKY_PsYu0RdsT6YuWPiNkSRhOQ@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250416-0001-Introduce-pg_sequence_state-function-for-e.patch
From 39731588dec00a6dd9585ec49086656621565f90 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250416 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values, including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 111 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 1c5cfee25d1..4fb6416aa56 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..e542351b258 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current value of the sequence, as stored on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..8071134643c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..4bc21c7af95 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..23341a36caa 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250416-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch
From 7328d57e7bfd416c4a7dd06b5c9a9c0d4b80c2a4 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250416 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89fb..dcf1a68308f 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c6e6d3b2b86..4209d8a40a6 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b426b5e4736..76aa26fa714 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -660,6 +660,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c03eca8e50..f953cad69ef 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3159,6 +3159,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index c916b9299a8..10dc03cd7cb 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d16bc208654..ddc99a6aac9 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2341,6 +2341,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

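For anyone reviewing the expected output and the new publication.sql tests above, the same behaviour can be exercised interactively with a minimal sketch along these lines (object names are illustrative and it assumes a server built with this patch set applied):

CREATE SEQUENCE demo_seq;
CREATE PUBLICATION demo_pub FOR ALL SEQUENCES;

-- puballsequences is the new flag the regression tests check; it should be
-- true here, while puballtables stays false.
SELECT pubname, puballtables, puballsequences
FROM pg_publication
WHERE pubname = 'demo_pub';

-- \dRp+ demo_pub should also show the additional "All sequences" column.

DROP PUBLICATION demo_pub;
DROP SEQUENCE demo_seq;
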
Attachment: v20250416-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 21802cb1a83359c5fd7f10c66060b86ef947468c Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250416 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 188 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 233 insertions(+), 188 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..63174d0cdff
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..b57563773e2 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -179,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5ce596f4576..f63d59f2036 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1028,7 +1028,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1150,7 +1150,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1206,7 +1206,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1272,7 +1272,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1407,7 +1407,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2249,7 +2249,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3726,7 +3726,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4787,7 +4787,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..a43a9b192bd 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+pg_noreturn extern void SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ddc99a6aac9..9cf79b3a019 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2903,7 +2903,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

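The documentation patch below notes that pg_subscription_rel now tracks sequences as well as tables, with only the 'i' (init) and 'r' (ready) states applicable to sequences. Purely as an illustration (not part of the patches), the subscriber-side state of subscribed sequences could be inspected with a query such as:

SELECT s.subname,
       sr.srrelid::regclass AS sequence_name,
       sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relkind = 'S';
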
Attachment: v20250416-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 7838a2217cde2431b2897906fdc0d9b9a8f848e6 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250416 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  25 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 241 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 372 insertions(+), 43 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..4f149656836 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8209,9 +8212,9 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
       <para>
        State code:
        <literal>i</literal> = initialize,
-       <literal>d</literal> = data is being copied,
-       <literal>f</literal> = finished table copy,
-       <literal>s</literal> = synchronized,
+       <literal>d</literal> = data is being copied (not applicable for sequences),
+       <literal>f</literal> = finished table copy (not applicable for sequences),
+       <literal>s</literal> = synchronized (not applicable for sequences),
        <literal>r</literal> = ready (normal replication)
       </para></entry>
      </row>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c1674c22cb2..daab5686b76 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5312,8 +5312,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5336,10 +5336,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..ac4abf67feb 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,19 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
+   Publications may currently only contain tables, all tables in a schema, or all sequences.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal> or <literal>ALL SEQUENCES</literal>.
+   Unlike tables, sequences allow users to synchronize their current state at
+   any given time. For more information, refer to <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1789,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber and, if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2313,22 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher either during the initial
+     <command>CREATE SUBSCRIPTION</command> or
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>.
+     If the subscriber is used as a read-only database, then this should
+     typically not be a problem.  If, however, some kind of switchover or
+     failover to the subscriber database is intended, then the sequences would
+     need to be updated to the latest values, either by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2646,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2660,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index c421d89edff..f5680347a1f 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2025,8 +2025,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..c474e37c03e 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
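(Sketch of how such a warning can arise; object names and the connection
string are hypothetical, and the FOR ALL SEQUENCES publication syntax is
assumed from the earlier patches of this series.)

    -- publisher
    CREATE SEQUENCE s1 INCREMENT BY 1;
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber: same sequence name, different definition
    CREATE SEQUENCE s1 INCREMENT BY 10;
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub WITH (copy_data = true);
    -- the sequencesync worker is expected to WARN about the parameter mismatch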
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 737e7489b78..21edf725843 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   the sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
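(Minimal usage sketch tying the documentation changes above together;
publication and subscription names are hypothetical.)

    -- publisher: list the sequences carried by each publication
    SELECT pubname, schemaname, sequencename FROM pg_publication_sequences;

    -- subscriber: pick up newly published sequences
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

    -- subscriber: re-synchronize all subscribed sequences
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;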

Attachment: v20250416-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 89356f70e50d1a3785fd941e51fbfba21df451bc Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 14 Apr 2025 09:19:07 +0530
Subject: [PATCH v20250416 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
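(One way to observe the INIT -> READY progression described above, assuming
the existing single-letter srsubstate codes 'i' and 'r' are reused for
sequences as this patch indicates.)

    SELECT sr.srrelid::regclass AS sequence,
           sr.srsubstate        AS state   -- 'i' = INIT, 'r' = READY
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';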
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 658 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 ++++++
 30 files changed, 1524 insertions(+), 179 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states: if true, return relations in all states; otherwise return only
+ * relations that have not yet reached READY state (for sequences this means
+ * those still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index e542351b258..f3d6abc7ad1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page_lsn of the remote sequence at the moment it
+ * was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
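(For reference, the remote query issued through pg_sequence_state() by
fetch_remote_sequence_data() later in this patch is roughly the following;
16394 stands in for the published sequence's OID, and pg_sequence_state()
itself comes from an earlier patch in this series.)

    SELECT last_value, log_cnt, is_called, page_lsn,
           seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
    FROM pg_sequence_state(16394), pg_sequence
    WHERE seqrelid = 16394;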
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..2e7e86cffa9 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note that this is a common function for handling different REFRESH
+ * commands according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1608,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1621,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..ae8f5cb522a 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker failure time in the subscription's apply
+ * worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..5673aaf715d
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,658 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the
+ * sequence relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync worker,
+		 * and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			seqRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data,
+					  lengthof(seqRow), seqRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (!*sequence_mismatch)
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+	/* Return the LSN when the sequence state was set. */
+	return *sequence_mismatch ? InvalidXLogRecPtr : seq_page_lsn;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs overhead of
+ * starting and committing transactions repeatedly. On the other hand, we want
+ * to avoid keeping this batch transaction open for extended periods so it is
+ * currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	*/
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			if (sequence_mismatch)
+				append_mismatched_sequences(mismatched_seqs, sequence_rel);
+
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or last
+		 * remaining sequences to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* Log (at DEBUG1) the sequences synchronized during the current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+			{
+				sequence_sync_error = true;
+				report_mismatched_sequences(mismatched_seqs);
+			}
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch, and prepare for next batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for next batch */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent repeated initiation of the sequencesync worker.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 63174d0cdff..fa23621a0a8 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b57563773e2..bfced60976f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f63d59f2036..8f9a5d88182 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1027,7 +1032,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1149,7 +1157,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1205,7 +1216,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1271,7 +1285,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1406,7 +1423,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2248,7 +2268,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3725,7 +3748,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4638,8 +4664,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4718,6 +4744,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4737,14 +4767,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4789,6 +4822,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 60b12446a1c..52f4b579a44 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..5c5a775d40d 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4209d8a40a6..5ae63717fb2 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 76aa26fa714..b43c44e4b05 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -817,6 +817,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10dc03cd7cb..6fddb5ea635 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8071134643c..a84fb506571 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index a43a9b192bd..1e6e4088e28 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

#203Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#202)
Re: Logical Replication of sequences

Hi Vignesh,

No comments for patch v20250416-0001
No comments for patch v20250416-0002
No comments for patch v20250416-0003

Here are some comments for patch v20250416-0004

======
src/backend/catalog/system_views.sql

1.
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+

Should we have some regression tests for this view?

SUGGESTION
test_pub=# CREATE SEQUENCE S1;
test_pub=# CREATE SEQUENCE S2;
test_pub=# CREATE PUBLICATION PUB1 FOR ALL SEQUENCES;
test_pub=# SELECT * FROM pg_publication_sequences;
pubname | schemaname | sequencename
---------+------------+--------------
pub1 | public | s1
pub1 | public | s2
(2 rows)

======
.../replication/logical/sequencesync.c

copy_sequence:

2.
+ res = walrcv_exec(conn, cmd.data,
+   lengthof(seqRow), seqRow);

Unnecessary wrapping.

~~~

Vignesh 16/4 answered my previous review comment #16
In the caller we set the sequence lsn only if sequence_mismatch is
false, so there is no issue.

PS REPLY 17/4. No, I don’t see that. I think
fetch_remote_sequence_data is unconditionally assigning to the
*page_lsn output parameter (aka seq_page_lsn). Anyway, it does not
matter anymore since the return from copy_sequence function is now
fixed.

~~~

3.
+ /* Update the sequence only if the parameters are identical. */
+ if (!*sequence_mismatch)
+ SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+ seq_log_cnt);
+
+ /* Return the LSN when the sequence state was set. */
+ return *sequence_mismatch ? InvalidXLogRecPtr : seq_page_lsn;

It might be simpler to have a single condition instead of checking
*sequence_mismatch twice.

SUGGESTION
/* Update the sequence only if the parameters are identical. */
if (*sequence_mismatch)
return InvalidXLogRecPtr;
else
{
SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
seq_log_cnt);
return seq_page_lsn;
}

~~~

LogicalRepSyncSequences:

Vignesh 16/4 answered my previous comment #19:
In this function we copy the sequences_not_synced sequences one by
one. While copying a sequence, if there is an error like the sequence
type or min or max etc. not matching, sequence_mismatch will be set.
Later, while copying another sequence, if an exception is raised and
we reach the catch block, we report an error in that case.

PS REPLY 17/4. I didn’t understand your explanation. I think anything
that causes sequence_mismatch to be assigned true is just an internal
logic state. It is not something that will be “thrown” and caught by
the PG_CATCH. Therefore, I did not understand why the “if
(sequence_mismatch)” needed to be within the PG_CATCH block.

~~~

Vignesh 16/4 answered my previous review comment #21:
I felt repeated is correct here as we don't want to repeatedly start
the sequence sync worker after every failure.

PS REPLY 17/4
Hm. Is that correct? AFAIK we still will "repeatedly" start the
sequence syn worker after a failure. I think the failure only *slows
down* the respawn of the worker because it will use the
TimestampDifferenceExceeds check if there had been a failure. That's
why I suggested s/to prevent repeated initiation/to prevent immediate
initiation/.

======
src/bin/pg_dump/pg_dump.c

getSubscriptionRelations:

Vignesh 16/4 answered my previous review comment #25:
You are talking about the error message, right? I have changed that.

PS REPLY 17/4
Yes, the error message, but also I thought 'tblinfo' var and
FindTableByOid function name should refer to relations instead of
tables?

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#204Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#202)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for patch v20250416-0005 (docs)

======
doc/src/sgml/catalogs.sgml

(52.55. pg_subscription_rel)

1.
       <para>
        State code:
        <literal>i</literal> = initialize,
-       <literal>d</literal> = data is being copied,
-       <literal>f</literal> = finished table copy,
-       <literal>s</literal> = synchronized,
+       <literal>d</literal> = data is being copied (not applicable
for sequences),
+       <literal>f</literal> = finished table copy (not applicable for
sequences),
+       <literal>s</literal> = synchronized (not applicable for sequences),
        <literal>r</literal> = ready (normal replication)
       </para></entry>

Would this be simpler if you used separate paragraphs for tables and sequences?

SUGGESTION
State codes for tables: i = initialize, d = data is being copied, f =
finished table copy, s = synchronized, r = ready (normal replication)

State codes for sequences: i = initialize, r = ready (normal replication)
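
As an aside, the state codes being documented here can be inspected on
the subscriber with a query along these lines (a minimal sketch; per
the above, only 'i' and 'r' would be expected for sequences):

-- On the subscriber: show the sync state of every subscribed relation
SELECT srrelid::regclass AS relation, srsubstate
FROM pg_catalog.pg_subscription_rel
ORDER BY 1;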

======
doc/src/sgml/logical-replication.sgml

(29.1 Publication)

2.
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only
one database.
+   generated from a table or a group of tables or current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.

/or current state/or the current state/

~~~

3.
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
+   Publications may currently only contain sequences or tables/all
tables in schema.
    Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   <literal>ALL TABLES</literal> and <literal>ALL SEQUENCES</literal>.

I don't think you need to say "/all tables in schema" because here we
are talking about the *type* of objects that can be in the
publication. OTOH, the FOR TABLES IN SCHEMA should be in the next
sentence.

SUGGESTION
Publications may currently only contain tables or sequences. Objects
must be added explicitly, except when a publication is created using
FOR TABLES IN SCHEMA, or FOR ALL TABLES, or FOR ALL SEQUENCES.

~~~

(29.7.2. Refreshing Stale Sequences)

4.
+   <para>
+    To verify, compare the sequences values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>.
+   </para>

/sequences values/sequence values/

Should we elaborate or give an example here of exactly how the user
should "compare the sequence values between the publisher and
subscriber"?
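
For illustration, the comparison could be as simple as running
something like the following on both the publisher and the subscriber
and checking that the results match (a minimal sketch using the
pg_sequence_state() function from patch 0001; 's1' is a placeholder
sequence name):

-- Run on both nodes and compare the output
SELECT last_value, is_called FROM pg_sequence_state('s1');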

~~~

(29.9. Restrictions)

5.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher either during the initial
+     <command>CREATE SUBSCRIPTION</command> or
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>.

You haven't mentioned ALTER SUBSCRIPTION ... REFRESH PUBLICATION, but
AFAIK that could also synchronize the latest values for any newly
added sequences. It seems a bit tedious to name all these commands all
the time, but I am not sure if there is a better way.
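
For reference, the set of commands that can synchronize sequence values
on the subscriber under this patch set would be roughly as follows (a
sketch; the connection string is elided):

-- initial synchronization of the published sequences
CREATE SUBSCRIPTION sub1 CONNECTION '...' PUBLICATION pub1;
-- synchronizes any newly added sequences
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
-- re-synchronizes all the published sequences
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;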

======
doc/src/sgml/ref/alter_subscription.sgml

6.
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for
recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>

I have a question about functionality: I understand we do not actually
"synchronize" sequence data at this time if copy_data=false, but
OTOH, shouldn't we still be checking (and WARNING) if any pub/sub
sequence differences are detected, regardless of the copy_data
bool value? Otherwise, I think all we are doing is deferring the
checking/warning until later (e.g. during REFRESH PUBLICATION
SEQUENCES). Isn't it better to get the warning earlier so the user
can fix it earlier?
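
To make the scenario concrete, here is a minimal sketch of the case in
question (the names, values, and the exact WARNING behaviour are
illustrative assumptions based on the doc text above):

-- Publisher:
CREATE SEQUENCE s1 MAXVALUE 1000;
CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
-- Subscriber (sequence definition differs from the publisher):
CREATE SEQUENCE s1 MAXVALUE 100;
-- With copy_data = true a sequence definition mismatch WARNING is expected;
-- the question is whether it should also be raised with copy_data = false:
CREATE SUBSCRIPTION sub1 CONNECTION '...' PUBLICATION pub1 WITH (copy_data = false);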

======
doc/src/sgml/ref/create_subscription.sgml

7.
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>

ditto previous review comment #6.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#205vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#203)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 17 Apr 2025 at 13:52, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

No comments for patch v20250416-0001
No comments for patch v20250416-0002
No comments for patch v20250416-0003

Here are some comments for patch v20250416-0004

======
src/backend/catalog/system_views.sql

1.
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+

Should we have some regression tests for this view?

SUGGESTION
test_pub=# CREATE SEQUENCE S1;
test_pub=# CREATE SEQUENCE S2;
test_pub=# CREATE PUBLICATION PUB1 FOR ALL SEQUENCES;
test_pub=# SELECT * FROM pg_publication_sequences;
 pubname | schemaname | sequencename
---------+------------+--------------
 pub1    | public     | s1
 pub1    | public     | s2
(2 rows)

I felt it is not required, as this will be verified by the create/alter
subscription tests.

======
src/bin/pg_dump/pg_dump.c

getSubscriptionRelations:

Vignesh 16/4 answered my previous review comment #25:
You are talking about the error message, right? I have changed that.

PS REPLY 17/4
Yes, the error message, but also I thought 'tblinfo' var and
FindTableByOid function name should refer to relations instead of
tables?

I felt there is no need to change these things and introduce a lot of
differences with the back branches.

The rest of the comments were fixed.

Regarding the below comments from [1].
4.
+   <para>
+    To verify, compare the sequences values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>.
+   </para>

/sequences values/sequence values/

Should we elaborate or give an example here of exactly how the user
should "compare the sequence values between the publisher and
subscriber"?

I felt it was obvious, so no need for an example in this case.

6.
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for
recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>

I have a question about functionality: I understand we do not actually
"synchronize" sequence data at this time if copy_data=false, but
OTOH, shouldn't we still be checking (and WARNING) if any pub/sub
sequence differences are detected, regardless of the copy_data
bool value? Otherwise, I think all we are doing is deferring the
checking/warning until later (e.g. during REFRESH PUBLICATION
SEQUENCES). Isn't it better to get the warning earlier so the user
can fix it earlier?

I noticed a similar case with tables.
example:
pub:
create table t1(c1 int, c2 int);
create publication pub1 for table t1;
sub:
create table t1(c1 int);
create subscription sub1 connection ... publication pub1 with (copy_data=off);

In this case, we will not detect the error during create subscription,
but only at a later insert.
As the suggested case is similar to the above, I felt it is ok.

======
doc/src/sgml/ref/create_subscription.sgml

7.
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>

ditto previous review comment #6.

This is similar to the above comment.

The rest of the comments were fixed. The attached v20250422 version of
the patches includes these changes.
[1]: /messages/by-id/CAHut+Ps2LzJwPGB8i2_ViS9c9VxeAeqDqvH5R8E-M8HvWeNfAQ@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250422-0001-Introduce-pg_sequence_state-function-for-e.patch (application/octet-stream)
From 5cadeaaaeecab8aa04895fab96f6049023c0fe60 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250422 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, pg_sequence_state, which allows
retrieval of sequence values including the page LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 111 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 574a544d9fa..be01319caf9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..e542351b258 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 errmsg("permission denied for sequence %s",
+						RelationGetRelationName(seqrel))));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..8071134643c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..4bc21c7af95 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..23341a36caa 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250422-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 0ad9ee70d1b288084f416db92f8161453151dd0a Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250422 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89fb..dcf1a68308f 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 105e917aa7b..bd41c009215 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b426b5e4736..76aa26fa714 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -660,6 +660,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c03eca8e50..f953cad69ef 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3159,6 +3159,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index c916b9299a8..10dc03cd7cb 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e5879e00dff..74dad46568a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2342,6 +2342,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250422-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From a926473ca54a840eee7f008d89dbdaedc7832e08 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250422 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganize the tablesync code by moving the code that is shared between
synchronization workers into a new syncutils.c file. This refactoring
will facilitate the development of the sequence synchronization worker
code.

This commit keeps the code reorganization separate from the functional
changes, making it clearer to reviewers that only existing code has
been moved. The changes in this patch can be squashed with subsequent
patches at commit time.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 188 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 233 insertions(+), 188 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..63174d0cdff
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..b57563773e2 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -179,8 +136,8 @@ finish_sync_worker(void)
  * Currently, this is used in the apply worker when transitioning from
  * CATCHUP state to SYNCDONE.
  */
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					WaitForRelationStateChange(rstate->relid,
+											   SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5ce596f4576..f63d59f2036 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1028,7 +1028,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1150,7 +1150,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1206,7 +1206,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1272,7 +1272,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1407,7 +1407,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2249,7 +2249,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3726,7 +3726,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4787,7 +4787,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..a43a9b192bd 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+pg_noreturn extern void SyncFinishWorker(void);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
@@ -259,9 +262,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 74dad46568a..82af9d8a741 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2902,7 +2902,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250422-0004-Enhance-sequence-synchronization-during-su.patch (application/octet-stream)
From 5c487be2f53f86a3dc2341014a752003f5bdb7d5 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 14 Apr 2025 09:19:07 +0530
Subject: [PATCH v20250422 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization.

Sequences have two states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of the sequences that are in INIT state, using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing the synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      the newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences. (A minimal usage sketch of these commands follows below.)
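
A minimal usage sketch of the commands described above (the publication
name, subscription name, and connection string below are illustrative
assumptions, not taken from the patch):

    -- publisher: publish every sequence in the database
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- subscriber: unchanged CREATE SUBSCRIPTION syntax; the published
    -- sequences are added to pg_subscription_rel in INIT state and the
    -- sequencesync worker copies their values
    CREATE SUBSCRIPTION sub_seqs
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub_seqs;

    -- later, once the publisher-side sequences have advanced, re-synchronize
    -- every sequence (this also picks up newly published sequences)
    ALTER SUBSCRIPTION sub_seqs REFRESH PUBLICATION SEQUENCES;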
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  27 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 658 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 ++++++
 30 files changed, 1524 insertions(+), 179 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If all_states is true, the requested tables and/or sequences are
+ * returned regardless of their state.
+ * If all_states is false, only the relations that have not yet reached
+ * the READY state are returned (for sequences this means they are still
+ * in the INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index e542351b258..f3d6abc7ad1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt of sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,11 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page_lsn is used by logical replication sequence synchronization to
+ * record the page_lsn of the sequence in the pg_subscription_rel system
+ * catalog. It reflects the page_lsn of the remote sequence at the moment
+ * it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..2e7e86cffa9 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info for both the tables and the
+			 * sequences fetched from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to reflect the publication objects (tables and/or
+ * sequences) currently associated with its publications.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note that this is a common function for handling the different REFRESH
+ * commands according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences,
+				 * so only stop the tablesync worker when the relation is not
+				 * a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,33 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1608,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1621,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			seqRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
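
For reference, fetch_sequence_list() above builds roughly the following query
against the publisher (a sketch; 'pub1' and 'pub2' are placeholder publication
names, and pg_publication_sequences is the view provided by this patch set):

    SELECT DISTINCT s.schemaname, s.sequencename
    FROM pg_catalog.pg_publication_sequences s
    WHERE s.pubname IN ('pub1', 'pub2');
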
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
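
To illustrate the new grammar production, the subscriber-side usage looks like
this (a sketch; sub1 is a placeholder subscription name):

    -- Refresh the subscription, picking up newly published tables and sequences:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- Re-synchronize all published sequences, including those already in READY state:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
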
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..ae8f5cb522a 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the current time as the sequencesync worker failure time in the
+ * subscription's apply worker slot.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This is registered as a before_shmem_exit callback and is invoked when the
+ * sequencesync worker exits due to a failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
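
With the launcher changes above, the new worker type is visible in the existing
monitoring view, e.g. (a sketch against the current pg_stat_subscription
columns):

    SELECT subid, relid, pid, worker_type
    FROM pg_stat_subscription;

A running sequencesync worker reports worker_type = 'sequence synchronization'
and a NULL relid, since a single worker handles all sequences of the
subscription.
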
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..e6a36b0bfca
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,658 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. The launcher process, by
+ *    contrast, has no direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Subscription sequences that are not yet in READY state; refreshed by
+ * FetchRelationStates().
+ */
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * any sequences still need to be synced after the sequencesync worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/* Check whether a sequencesync worker is already running. */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync worker,
+		 * and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			seqRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (*sequence_mismatch)
+		return InvalidXLogRecPtr;
+	else
+	{
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+		/* Return the LSN when the sequence state was set. */
+		return seq_page_lsn;
+	}
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs overhead of
+ * starting and committing transactions repeatedly. On the other hand, we want
+ * to avoid keeping this batch transaction open for extended periods so it is
+ * currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * If the sequence copy fails, report a warning for the sequences whose
+		 * parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the sequences to synchronize?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+			{
+				sequence_sync_error = true;
+				report_mismatched_sequences(mismatched_seqs);
+			}
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for the next batch. */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * If sequence synchronization failed due to a parameter mismatch, set the
+	 * failure time to prevent the sequencesync worker from being restarted
+	 * immediately.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because parameters differ between the publisher and subscriber for some sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
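
For reference, per sequence the worker above issues roughly the following on
the publisher (a sketch; 'public'/'seq1' and the OID 16394 are placeholders,
and pg_sequence_state() is assumed to be provided by the companion patch):

    -- copy_sequence(): resolve the remote OID of the sequence
    SELECT c.oid, c.relkind
    FROM pg_catalog.pg_class c
      INNER JOIN pg_catalog.pg_namespace n ON (c.relnamespace = n.oid)
    WHERE n.nspname = 'public' AND c.relname = 'seq1';

    -- fetch_remote_sequence_data(): fetch the remote state and parameters
    SELECT last_value, log_cnt, is_called, page_lsn,
           seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
    FROM pg_sequence_state(16394), pg_sequence
    WHERE seqrelid = 16394;
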
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 63174d0cdff..fa23621a0a8 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need to set a sequence failure time. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a sequencesync
+ * worker for newly added sequences if needed.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * has_subtables is static so that its value can be reused until the
+	 * subscription relation state cache is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b57563773e2..bfced60976f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ WaitForRelationStateChange(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f63d59f2036..8f9a5d88182 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -487,6 +487,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1027,7 +1032,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1149,7 +1157,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1205,7 +1216,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1271,7 +1285,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1406,7 +1423,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2248,7 +2268,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3725,7 +3748,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4638,8 +4664,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4718,6 +4744,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4737,14 +4767,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4789,6 +4822,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failuretime, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 60b12446a1c..52f4b579a44 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..5c5a775d40d 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bd41c009215..95ce19a8843 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 76aa26fa714..b43c44e4b05 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -817,6 +817,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10dc03cd7cb..6fddb5ea635 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8071134643c..a84fb506571 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index a43a9b192bd..1e6e4088e28 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,24 +253,29 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failuretime(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 extern bool WaitForRelationStateChange(Oid relid, char expected_state);
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0

v20250422-0005-Documentation-for-sequence-synchronization.patch
From 701ad00049fbea2e42b05cea20fd86799249a800 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250422 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  26 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 241 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 374 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..0b0cc0d6893 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,17 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State code for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c1674c22cb2..daab5686b76 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table synchronization worker or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5312,8 +5312,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5336,10 +5336,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..f355c7721e5 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences allow users
+   to synchronize their current state at any given time. For more information,
+   refer to <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2314,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, but the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2644,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2658,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d768ea065c5..08d88c79687 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2025,8 +2025,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..c474e37c03e 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#206Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#205)
Re: Logical Replication of sequences

Hi Vignesh.

Review comments for patch v20250422-0001.

======
Commit message

1.
This patch introduces a new function: pg_sequence_state function
allows retrieval of sequence values including LSN.

SUGGESTION
This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.

======
src/backend/commands/sequence.c

pg_sequence_state:

2.
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied for sequence %s",
+ RelationGetRelationName(seqrel))));

Has redundant parentheses.
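
For example, the same call could be written without the outer parentheses,
i.e. something like this (only the parentheses change; the arguments stay
the same):

ereport(ERROR,
        errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
        errmsg("permission denied for sequence %s",
               RelationGetRelationName(seqrel)));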

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#207Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#205)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for patch v20250422-0003.

======
src/backend/replication/logical/syncutils.c

1.
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)

Why does this have the pg_noreturn annotation? None of the other void
functions do.

~~~

2.
+bool
+FetchRelationStates(bool *started_tx)

All the functions in the sync utils.c are named like Syncxxx, so for
consistency, why not name this one also?
e.g. /FetchRelationStates/SyncFetchRelationStates/

======
src/backend/replication/logical/tablesync.c

3.
-static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+bool
+WaitForRelationStateChange(Oid relid, char expected_state)
 {
  char state;
~

3a.
Why isn't this static, like before?

~

3b.
If it is *only* for tables and nothing else, shouldn't it be static
and have a function name like 'wait_for_table_state_change' (not
_relation_)?
OTOH, if there is potential for this to be used for sequences in
future, then it should be in the syncutils.c module with a name like
'SyncWaitForRelationStateChange'.

======
src/include/replication/worker_internal.h

4.
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+pg_noreturn extern void SyncFinishWorker(void);

extern int logicalrep_sync_worker_count(Oid subid);

@@ -259,9 +262,13 @@ extern void
ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
extern bool AllTablesyncsReady(void);
extern void UpdateTwoPhaseState(Oid suboid, char new_state);

-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
- uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+ uint32 hashvalue);

~

4a.
Why does SyncFinishWorker have the pg_noreturn annotation? None of the
other void functions do.

~

4b.
I felt that all the SyncXXX functions exposed from syncutils.c should
be grouped together, and maybe even with a comment like /* from
syncutils.c */
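
For example, something like this (just a sketch, assuming these are the
prototypes that come from syncutils.c):

/* from syncutils.c */
pg_noreturn extern void SyncFinishWorker(void);
extern bool FetchRelationStates(bool *started_tx);
extern void SyncProcessRelations(XLogRecPtr current_lsn);
extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
                                         uint32 hashvalue);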

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#208Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#205)
Re: Logical Replication of sequences

Hi Vignesh,

Some comments for v20250422-0004.

======
src/backend/commands/sequence.c

pg_sequence_state:

1.
+ * The page_lsn will be utilized in logical replication sequence
+ * synchronization to record the page_lsn of sequence in the
pg_subscription_rel
+ * system catalog. It will reflect the page_lsn of the remote sequence at the
+ * moment it was synchronized.
+ *

SUGGESTION (minor rewording)
The page LSN will be used in logical replication of sequences to
record the LSN of the sequence page in the pg_subscription_rel system
catalog. It reflects the LSN of the remote sequence at the time it
was synchronized.

======
src/backend/commands/subscriptioncmds.c

AlterSubscription:

2.
- case ALTER_SUBSCRIPTION_REFRESH:
+ case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+ {
+ if (!sub->enabled)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not
allowed for disabled subscriptions"));
+
+ PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ...
REFRESH PUBLICATION SEQUENCES");
+
+ AlterSubscription_refresh(sub, true, NULL, false, true, true);
+
+ break;
+ }
+
+ case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:

I felt these should be reordered so REFRESH PUBLICATION comes before
REFRESH PUBLICATION SEQUENCES. No particular reason, but AFAICT that
is how you've ordered them in all other places -- eg, gram.y, the
documentation, etc. -- so let's be consistent.
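
In other words, something like this (case bodies elided):

case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
    ...
    break;

case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
    ...
    break;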

======
src/backend/replication/logical/launcher.c

3.
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failuretime(int code, Datum arg)

It might be better to call this function name
'logicalrep_seqsyncworker_failure' (not _failuretime) because it is
more generic, and in future, you might want to do more things in this
function apart from just setting the failure time.

======
src/backend/replication/logical/syncutils.c

SyncFinishWorker:

4.
+ /* This is a clean exit, so no need to set a sequence failure time. */
+ if (wtype == WORKERTYPE_SEQUENCESYNC)
+ cancel_before_shmem_exit(logicalrep_seqsyncworker_failuretime, 0);
+

I didn't think the comment should mention setting 'failure time'.
Those details belong at a lower level -- here, it is better to be more
generic.

SUGGESTION:
/* This is a clean exit, so no need for any sequence failure logic. */

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#209vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#207)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 24 Apr 2025 at 05:07, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Some review comments for patch v20250422-0003.

======
src/backend/replication/logical/syncutils.c

1.
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)

Why does this have the pg_noreturn annotation? None of the other void
functions do.

It indicates that the function does not return control to the caller
after it finishes. This is not a new function; it was just moved from
tablesync.c.
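
For example, the pattern is along these lines (a sketch only, not the exact
syncutils.c code; the proc_exit() at the end is what makes pg_noreturn
appropriate):

pg_noreturn void
SyncFinishWorker(void)
{
    /* ... clean up the sync worker ... */

    /* This never returns to the caller. */
    proc_exit(0);
}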

======
src/include/replication/worker_internal.h

4.
@@ -250,6 +252,7 @@ extern void logicalrep_worker_stop(Oid subid, Oid relid);
extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
+pg_noreturn extern void SyncFinishWorker(void);

extern int logicalrep_sync_worker_count(Oid subid);

@@ -259,9 +262,13 @@ extern void
ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
extern bool AllTablesyncsReady(void);
extern void UpdateTwoPhaseState(Oid suboid, char new_state);

-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
- uint32 hashvalue);
+extern bool FetchRelationStates(bool *started_tx);
+extern bool WaitForRelationStateChange(Oid relid, char expected_state);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+ uint32 hashvalue);

~

4a.
Why does SyncFinishWorker have the pg_noreturn annotation? None of the
other void functions do.

This is the same as comment #1.

The rest of the comments were fixed.
Also the comments from [1] and [2] are fixed in the attached v20250424 version.
[1]: /messages/by-id/CAHut+PticXRg4_W=d7H37DWPh5LNePbcQ5RKc3vUW5HCzAX_fg@mail.gmail.com
[2]: /messages/by-id/CAHut+PtBhb89+1DAYUFc=1Ojkh1mHro+g3UCqMZAQoSpPQoqZA@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250424-0001-Introduce-pg_sequence_state-function-for-e.patch
From 3af34bcb0eaa9feeca02356d0e3eec8ec3e715ac Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250424 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 111 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 574a544d9fa..be01319caf9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..2e5b6cbecd1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..8071134643c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..4bc21c7af95 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..23341a36caa 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

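For quick reference, a usage sketch of the function introduced by the 0001
patch above (the sequence name and the returned values are illustrative only):

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');
SELECT * FROM pg_sequence_state('demo_seq');

--  page_lsn  | last_value | log_cnt | is_called
-- -----------+------------+---------+-----------
--  0/15D02A8 |          1 |      32 | t
-- (1 row)
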
v20250424-0004-Enhance-sequence-synchronization-during-su.patch (application/octet-stream)
From f2a2657fd31e3133801ae61938ff3b272e44ed93 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 24 Apr 2025 14:12:02 +0530
Subject: [PATCH v20250424 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Sequences dropped from the publication are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences dropped from the publication are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 658 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 ++++++
 30 files changed, 1523 insertions(+), 179 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 2e5b6cbecd1..b0f842e5e39 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1888,6 +1890,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 
 /*
  * Return the current on-disk state of the sequence.
+ * 
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
  *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..e6a36b0bfca
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,658 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequencesync worker already running?
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync worker,
+		 * and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			seqRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (*sequence_mismatch)
+		return InvalidXLogRecPtr;
+	else
+	{
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+		/* Return the LSN when the sequence state was set. */
+		return seq_page_lsn;
+	}
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Append the schema-qualified name of a sequence whose parameters differ
+ * between the publisher and the subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs the overhead
+ * of starting and committing transactions repeatedly. On the other hand, we
+ * want to avoid keeping this batch transaction open for extended periods, so
+ * it is currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * If copying the sequence fails, report a warning for any sequences
+		 * whose parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or
+		 * synchronized the last remaining sequence?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+			{
+				sequence_sync_error = true;
+				report_mismatched_sequences(mismatched_seqs);
+			}
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for the next batch. */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent immediate initiation of the sequencesync worker.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling, and
+ * disable the subscription if required.
+ *
+ * Note that we don't attempt to handle FATAL errors here; those are
+ * typically caused by system resource problems and are not expected to be
+ * repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates(void)
 {
+	/*
+	 * has_subtables is declared static since the same value can be reused
+	 * until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..688e5c85c47 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..1742968427a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4648,8 +4674,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4754,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4777,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4832,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 60b12446a1c..52f4b579a44 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..5c5a775d40d 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bd41c009215..95ce19a8843 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 76aa26fa714..b43c44e4b05 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -817,6 +817,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10dc03cd7cb..6fddb5ea635 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8071134643c..a84fb506571 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync newly added
+# sequences from the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync newly
+# added sequences from the publisher, and changes to existing sequences
+# should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = false) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is false');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value differs between the
+# publisher and the subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0
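
A quick note for reviewers on the user-facing flow that the 0002 patch above
implements (and that the new 036_sequences.pl test exercises). This is only a
rough sketch; the object names and the connection-string placeholder are
illustrative, not taken from the patch:

    -- Publisher
    CREATE SEQUENCE s1;
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- Subscriber: the sequence must already exist with matching parameters
    CREATE SEQUENCE s1;
    CREATE SUBSCRIPTION seq_sub
        CONNECTION '<publisher connstr>' PUBLICATION seq_pub;

    -- Later, after the publisher-side sequence has advanced
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    -- Verify on the subscriber
    SELECT last_value, log_cnt, is_called FROM s1;

As per the patch and its tests, REFRESH PUBLICATION synchronizes only newly
published sequences (and skips even those when copy_data = false), whereas
REFRESH PUBLICATION SEQUENCES re-synchronizes existing sequences as well; if
the local and remote sequence parameters differ, the sequencesync worker skips
that sequence and logs a WARNING instead.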

Attachment: v20250424-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 0f1aa2e0f9c6ee8008502814559fa42f9f602449 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250424 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to the table synchronization workers
+ *	  and the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within SyncFinishWorker(). Even if
+		 * we fail to remove the origin here, the apply worker will clean it
+		 * up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 74dad46568a..82af9d8a741 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2902,7 +2902,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250424-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch
From f49043a96f7a24909e7332ceb0faedfaa4f46af9 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250424 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d command now shows which publications include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
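
For illustration, a minimal usage sketch of the new syntax (the same
statements appear in the documentation and regression test changes
below):

    -- publish all sequences in the database, including ones created later
    CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;

    -- ALL TABLES and ALL SEQUENCES can be combined in a single publication
    CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;

After creating such a publication, \dRp+ reports "All sequences" as "t"
for it.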
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89fb..dcf1a68308f 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences. Also check
+ * whether any publication object type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 105e917aa7b..bd41c009215 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b426b5e4736..76aa26fa714 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -660,6 +660,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c03eca8e50..f953cad69ef 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3159,6 +3159,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index c916b9299a8..10dc03cd7cb 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e5879e00dff..74dad46568a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2342,6 +2342,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

v20250424-0005-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From f2072f943e564c3bd44de4cdd54669411b394b20 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250424 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  26 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 241 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 374 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..0b0cc0d6893 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,17 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State code for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c1674c22cb2..daab5686b76 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5312,8 +5312,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5336,10 +5336,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..ad27e6d1d80 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences allow users
+   to synchronize their current state at any given time. For more information,
+   refer to <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2314,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2644,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2658,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d768ea065c5..08d88c79687 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2025,8 +2025,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..c474e37c03e 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#210Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#209)
Re: Logical Replication of sequences

Hi Vignesh.

FYI, patch v20250424-0004 reported whitespace errors when applied.

[postgres@CentOS7-x64 oss_postgres_misc]$ git apply
../patches_misc/v20250424-0004-Enhance-sequence-synchronization-during-su.patch
../patches_misc/v20250424-0004-Enhance-sequence-synchronization-during-su.patch:366:
trailing whitespace.
*
warning: 1 line adds whitespace errors.
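
As a local workaround, applying with git's whitespace fixup cleans this
up (the patch itself should still drop the trailing whitespace at the
reported line 366), e.g.:

git apply --whitespace=fix ../patches_misc/v20250424-0004-Enhance-sequence-synchronization-during-su.patch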

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#211Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#209)
Re: Logical Replication of sequences

Hi Vignesh.

Some review comments for v20250426-0005.

======
doc/src/sgml/catalogs.sgml

1.
       <para>
-       State code:
+       State code for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State code for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>

1a.
There should be an introductory sentence to say what this field is.
e.g. "State code for the table or sequence."

~

1b.
/State code for tables/State codes for tables/

~

1c.
/State code for sequences/State codes for sequences/

======
doc/src/sgml/logical-replication.sgml

2.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences
allow users
+   to synchronize their current state at any given time. For more information,
+   refer to <xref linkend="logical-replication-sequences"/>.

This is OK, but maybe the "sequences allow users..." is worded
strangely. How about below?

SUGGESTION
Unlike tables, the current state of sequences may be synchronised at any time.

~~~

3.
+     Incremental sequence changes are not replicated.  The data in serial or
+     identity columns backed by sequences will of course be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.

Seems to be a missing word here

/The data in serial/Although the data in serial/

OR

Just change the punctuation to a semicolon.
/of the table,/of the table;/

======
doc/src/sgml/ref/alter_subscription.sgml

4.
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link
linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+         </para>

4a.
Missing period in the last sentence.

~

4b.
AFAIK, when copy_data=false, then not only will *existing* sequences
not be synchronised, but even the *new* sequences will not be
synchronised. Effectively, when copy_data = false, then nothing at all
happens for sequences as far as what the user sees, right?

Experiment:

test_pub=# create publication pub1 for all sequences;
CREATE PUBLICATION

test_sub=# create sequence s1;
CREATE SEQUENCE
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION

test_pub=# create sequence s1;
CREATE SEQUENCE
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
2
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
3
(1 row)

test_sub=# alter subscription sub1 refresh publication with (copy_data=false);
ALTER SUBSCRIPTION

test_sub=# select * from s1;
 last_value | log_cnt | is_called
------------+---------+-----------
          1 |       0 | f
(1 row)

So, subscriber side s1 is unaffected.

Maybe it is not worth the effort, but doesn't this mean that you could
optimise the AlterSubscription_refresh() logic to completely skip all
processing for sequences when copy_data=false? E.g., what's the point
of gathering publisher sequence lists and setting INIT states for
them, etc, when it won't synchronise anything because copy_data=false?
Everything will be synchronised later anyway when the user does
REFRESH PUBLICATION SEQUENCES.
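
To illustrate, something like the rough sketch below is what I had in
mind (the helper names fetch_sequence_list / mark_sequence_init are only
placeholders here, not the actual functions from the patch):

	/* Inside AlterSubscription_refresh(), sketch only. */
	if (copy_data)
	{
		List	   *pub_sequences;
		ListCell   *lc;

		/* Placeholder: local OIDs of sequences published by sub->publications. */
		pub_sequences = fetch_sequence_list(wrconn, sub->publications);

		/* Record each sequence as INIT so a sync worker picks it up later. */
		foreach(lc, pub_sequences)
			mark_sequence_init(sub->oid, lfirst_oid(lc));
	}
	/* With copy_data = false, no sequence state would be touched at all. */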

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#212vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#211)
5 attachment(s)
Re: Logical Replication of sequences

On Sat, 26 Apr 2025 at 14:24, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

Some review comments for v20250426-0005.

4b.
AFAIK, when copy_data=false, then not only will *existing* sequences
not be synchronised, but even the *new* sequences will not be
synchronised. Effectively, when copy_data = false, then nothing at all
happens for sequences as far as what the user sees, right?

Experiment:

test_pub=# create publication pub1 for all sequences;
CREATE PUBLICATION

test_sub=# create sequence s1;
CREATE SEQUENCE
NOTICE: created replication slot "sub1" on publisher
CREATE SUBSCRIPTION

test_pub=# create sequence s1;
CREATE SEQUENCE
test_pub=# select * from nextval('s1');
nextval
---------
1
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
2
(1 row)

test_pub=# select * from nextval('s1');
nextval
---------
3
(1 row)

test_sub=# alter subscription sub1 refresh publication with (copy_data=false);
ALTER SUBSCRIPTION

test_sub=# select * from s1;
 last_value | log_cnt | is_called
------------+---------+-----------
          1 |       0 | f
(1 row)

So, subscriber side s1 is unaffected.

Maybe it is not worth the effort, but doesn't this mean that you could
optimise the AlterSubscription_refresh() logic to completely skip all
processing for sequences when copy_data=false? E.g., what's the point
of gathering publisher sequence lists and setting INIT states for
them, etc, when it won't synchronise anything because copy_data=false?
Everything will be synchronised later anyway when the user does
REFRESH PUBLICATION SEQUENCES.

Currently this is in line with the table behavior; I felt we should keep
it that way so that it will be easier to extend sequence replication as
use cases grow, and it stays more consistent with tables.

The rest of the comments have been fixed; the attached v20250428 version
of the patch set contains those changes.
The fix for the issue reported at [1] is also included in the attached patches.
[1]: /messages/by-id/CAHut+PujY8Xd=T94zuPuF21s0dLRGVJaXgRnLbGE47pwSpo-YA@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250428-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 95fdfaee44c77f5d6d2ae275ecb143e365747c0c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250428 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 111 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 574a544d9fa..be01319caf9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..2e5b6cbecd1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence as stored on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..8071134643c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..4bc21c7af95 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..23341a36caa 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
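
A quick usage sketch of pg_sequence_state() from the 0001 patch above,
for reference (the page_lsn and log_cnt values shown are illustrative
only, not actual output):

test=# CREATE SEQUENCE demo_seq;
CREATE SEQUENCE
test=# SELECT nextval('demo_seq');
 nextval
---------
       1
(1 row)

test=# SELECT page_lsn, last_value, log_cnt, is_called
test-#   FROM pg_sequence_state('demo_seq');
 page_lsn  | last_value | log_cnt | is_called
-----------+------------+---------+-----------
 0/14E3A50 |          1 |      32 | t
(1 row)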

v20250428-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 65099081b0486c5c117898cf82264fafc5152a65 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250428 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 73f0c8d89fb..dcf1a68308f 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -118,16 +119,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -159,6 +150,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -277,10 +288,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -296,8 +307,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -447,6 +459,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
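As the Notes hunk above documents, FOR ALL SEQUENCES (like FOR ALL TABLES)
requires superuser. A sketch of what a non-superuser sees, using the error
text added in publicationcmds.c below (role and publication names
illustrative):

    SET ROLE regress_nonsuper;
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
    -- ERROR:  must be superuser to create a FOR ALL SEQUENCES publication
    RESET ROLE;
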
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets the list of sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
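For intuition, GetAllSequencesPublicationRelations() above gathers roughly
the same set of OIDs as this catalog query (a sketch of the
is_publishable_class() conditions; 16384 is FirstNormalObjectId):

    SELECT c.oid
    FROM pg_catalog.pg_class c
    WHERE c.relkind = 'S'            -- RELKIND_SEQUENCE
      AND c.relpersistence = 'p'     -- permanent sequences only
      AND c.oid >= 16384;            -- skip system objects
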
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
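Analogous to the existing FOR ALL TABLES rule, the owner-change hunk above
rejects handing such a publication to a non-superuser, e.g. (role and
publication names illustrative):

    ALTER PUBLICATION all_sequences OWNER TO regress_nonsuper;
    -- ERROR:  permission denied to change owner of publication "all_sequences"
    -- HINT:  The owner of a FOR ALL SEQUENCES publication must be a superuser.
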
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.  Also check
+ * that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
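The preprocess_pub_all_objtype_list() check above is what raises the
duplicate-object-type errors exercised in the publication.sql tests near the
end of the patch, e.g. (publication name illustrative):

    CREATE PUBLICATION dup_pub FOR ALL TABLES, ALL TABLES;
    -- ERROR:  invalid publication object list
    -- DETAIL:  ALL TABLES can be specified only once.
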
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 105e917aa7b..bd41c009215 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
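Net effect on dumps: whichever order the publication was created with,
pg_dump emits the canonical FOR ALL TABLES, ALL SEQUENCES form, as the
002_pg_dump.pl expectations below check:

    -- created as:  CREATE PUBLICATION pub6 FOR ALL SEQUENCES, ALL TABLES WITH (publish = '');
    -- dumped as:   CREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');
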
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b426b5e4736..76aa26fa714 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -660,6 +660,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c03eca8e50..f953cad69ef 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3159,6 +3159,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index c916b9299a8..10dc03cd7cb 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication object types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e5879e00dff..74dad46568a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2342,6 +2342,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
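
To summarize the new syntax exercised by the tests above, a minimal sketch
(assuming the preceding patch is applied; demo_seq and demo_pub are
illustrative names, not part of the patch):

CREATE SEQUENCE demo_seq;

SET client_min_messages = 'ERROR';
CREATE PUBLICATION demo_pub FOR ALL SEQUENCES;
RESET client_min_messages;

-- expected: puballsequences is true for this publication
SELECT pubname, puballtables, puballsequences
FROM pg_publication WHERE pubname = 'demo_pub';

-- \dRp+ demo_pub should show the new "All sequences" column as 't'
DROP PUBLICATION demo_pub;
DROP SEQUENCE demo_seq;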

v20250428-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 7bf4955dcda7abc384847aef5f0af8075ce972ca Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250428 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganize the tablesync code by moving the common synchronization
code into a new file, syncutils.c. This refactoring will facilitate
the development of the sequence synchronization worker code.

This commit separates code reorganization from functional changes,
making it clear to reviewers that only existing code has been moved.
If desired, this patch can be squashed into the subsequent patches at
commit time.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 74dad46568a..82af9d8a741 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2902,7 +2902,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250428-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 6e29935ac82e0c83cac01155e91b7d4f85a31948 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250428 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 241 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 377 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 14661ac2cc6..a4b77ea76ae 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5167,9 +5167,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5310,8 +5310,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5334,10 +5334,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..6644a28d255 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2314,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will of course be
+     replicated as part of the table, the sequences themselves do not replicate
+     ongoing changes. On the subscriber, a sequence will retain the last value
+     it synchronized from the publisher. If the subscriber is used as a
+     read-only database, then this should typically not be a problem.  If,
+     however, some kind of switchover or failover to the subscriber database is
+     intended, then the sequences would need to be updated to the latest
+     values, either by executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2644,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2658,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d768ea065c5..08d88c79687 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2025,8 +2025,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
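
As a side note on the "Refreshing Stale Sequences" subsection above, here is a
minimal psql sketch (not part of the patches; it reuses the s1/sub1 objects from
the documentation examples and the existing pg_sequences view) for checking
drift before deciding to re-synchronize:

-- On the publisher: note the current value of the published sequence.
SELECT schemaname, sequencename, last_value
  FROM pg_sequences WHERE sequencename = 's1';

-- On the subscriber: run the same query and compare last_value.
SELECT schemaname, sequencename, last_value
  FROM pg_sequences WHERE sequencename = 's1';

-- If the values have drifted, re-synchronize all subscribed sequences
-- with the command documented above.
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;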

Attachment: v20250428-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From ec9ee71d2923a62de80deb06e9696e73b350e210 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 28 Apr 2025 11:41:50 +0530
Subject: [PATCH v20250428 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done; synchronized sequences are committed in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 658 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 ++++++
 30 files changed, 1523 insertions(+), 179 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl
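
As a quick illustration of the INIT/READY tracking described in the commit
message above, here is a query sketch (not part of the patch) that a subscriber
could run to see the per-sequence sync state, assuming the srsubstate codes
'i' (INIT) and 'r' (READY) documented in the 0005 patch above:

SELECT s.subname, c.relname AS sequencename, sr.srsubstate
  FROM pg_subscription_rel sr
  JOIN pg_class c ON c.oid = sr.srrelid
  JOIN pg_subscription s ON s.oid = sr.srsubid
 WHERE c.relkind = 'S';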

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
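
A quick usage sketch for the function added above (not from the patch): the
pg_publication_sequences view defined later in this patch maps the sequence
OIDs returned by pg_get_publication_sequences() to schema-qualified names.
Assuming 'pub1' is a FOR ALL SEQUENCES publication:

SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';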
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 2e5b6cbecd1..8c5c81818ca 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The list of relations
+			 * includes both tables and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added to or removed from the publication since the
+ * subscription was created or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added to or removed from the publication
+ * since the subscription was created or last refreshed.
+ *
+ * Note: this is a common function that handles the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
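+ *
+ * For illustration, the corresponding call sites in AlterSubscription() for
+ * these two commands are:
+ *
+ *    REFRESH PUBLICATION SEQUENCES:
+ *        AlterSubscription_refresh(sub, true, NULL, false, true, true);
+ *    REFRESH PUBLICATION:
+ *        AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);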
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker's failure time in the apply worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..e6a36b0bfca
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,658 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
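+ *
+ * For example, a sequence added by one of the commands above is recorded in
+ * pg_subscription_rel with srsubstate = 'i' (SUBREL_STATE_INIT); once the
+ * sequencesync worker has applied the remote value, the entry is updated to
+ * srsubstate = 'r' (SUBREL_STATE_READY).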
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks held on the
+ * sequence relations are released at each transaction commit.
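+ * For example, a subscription with 250 sequences to synchronize would commit
+ * three batch transactions (100 + 100 + 50 sequences).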
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync worker,
+		 * and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			seqRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch Oid. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (*sequence_mismatch)
+		return InvalidXLogRecPtr;
+	else
+	{
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+		/* Return the LSN when the sequence state was set. */
+		return seq_page_lsn;
+	}
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs the overhead
+ * of starting and committing a transaction repeatedly. On the other hand, we
+ * want to avoid keeping this batch transaction open for extended periods, so
+ * it is currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * In case sequence copy fails, throw a warning for the sequences that
+		 * did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or the
+		 * last of the sequences remaining to be synchronized?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+			{
+				sequence_sync_error = true;
+				report_mismatched_sequences(mismatched_seqs);
+			}
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for the next batch. */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent immediate initiation of the sequencesync worker.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters between the publisher and subscriber do not match for all sequences"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates(void)
 {
+	/*
+	 * has_subtables is declared static, since the same value can be reused
+	 * until the subscription's relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..688e5c85c47 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..1742968427a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4648,8 +4674,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4754,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4777,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4832,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..5c5a775d40d 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bd41c009215..95ce19a8843 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 76aa26fa714..b43c44e4b05 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -817,6 +817,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10dc03cd7cb..6fddb5ea635 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8071134643c..a84fb506571 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0
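
For anyone skimming the patch set rather than running the new TAP test above, the same user-visible flow can be sketched at the SQL level. This is only a rough sketch under the assumption that the patch set is applied; the object names follow the test above, the connection string is illustrative, and the exact last_value/log_cnt output depends on sequence caching:

-- Publisher: create a sequence and publish all sequences in the database.
CREATE SEQUENCE regress_s1;
CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

-- Subscriber: the matching sequence must already exist locally.
CREATE SEQUENCE regress_s1;
CREATE SUBSCRIPTION regress_seq_sub
    CONNECTION 'dbname=postgres'   -- illustrative connection string
    PUBLICATION regress_seq_pub;

-- Pick up sequences newly added to the publication; sequences that were
-- already subscribed are not re-synchronized by this form.
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;

-- Re-synchronize the current state of all subscribed sequences.
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

-- Inspect the synchronized state on the subscriber.
SELECT last_value, log_cnt, is_called FROM regress_s1;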

#213Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#212)
Re: Logical Replication of sequences

Hi Vignesh.

Some trivial review comments for DOCS patch v20250428-0005.

======
doc/src/sgml/logical-replication.sgml

1.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL
TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronised at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>

AFAIK the PostgreSQL documentation uses US spelling:

/synchronised/synchronized/

~~~

2.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will of course be
+     replicated as part of the table, the sequences themselves do not replicate
+     ongoing changes. On the subscriber, a sequence will retain the last value

I don't think you need to say "of course" here.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#214vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#213)
5 attachment(s)
Re: Logical Replication of sequences

On Tue, 29 Apr 2025 at 06:15, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

Some trivial review comments for DOCS patch v20250428-0005.

======
doc/src/sgml/logical-replication.sgml

1.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL
TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronised at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
</para>

AFAIK the PostgreSQL documentation uses US spelling:

/synchronised/synchronized/

~~~

2.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will of course be
+     replicated as part of the table, the sequences themselves do not replicate
+     ongoing changes. On the subscriber, a sequence will retain the last value

I don't think you need to say "of course" here.

Thanks for the comments; the updated patch includes these changes.

Regards,
Vignesh

Attachments:

v20250501-0001-Introduce-pg_sequence_state-function-for-e.patch
From 9882808f349f24a9eaefa8e4cb7cd19919f5b33a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250501 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 111 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index af3d056b992..9d1ae29441e 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..2e5b6cbecd1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence, as stored in its on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..8071134643c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..4bc21c7af95 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..23341a36caa 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
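
As a quick illustration of the 0001 patch, the new function can be queried like the regression test above; the main difference from selecting from the sequence directly is the extra page_lsn column. A minimal sketch, assuming the patch is applied (the sequence name is illustrative, and the reported page_lsn and log_cnt depend on WAL position and sequence caching):

-- Create and advance an illustrative sequence.
CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');

-- pg_sequence_state() returns the on-disk state plus the page LSN,
-- which a plain "SELECT * FROM demo_seq" does not expose.
SELECT page_lsn, last_value, log_cnt, is_called
FROM pg_sequence_state('demo_seq');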

v20250501-0005-Documentation-for-sequence-synchronization.patch
From a846e88e492288d8d8c0fb2e6c0adc09321f205d Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250501 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 241 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 377 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fd6e3e02890..290ea5ac457 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5311,8 +5311,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5335,10 +5335,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..43d63d6bace 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2314,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2644,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2658,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..cac2c8bf7e3 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

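To make the documented behavior above concrete, here is a rough usage sketch
(only an illustration, not part of the patches; the subscription name sub1 and
publication name pub1 are made up):

  -- Subscriber: pick up tables/sequences newly added to the publication and
  -- synchronize them once (previously subscribed sequences are left alone).
  ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

  -- Subscriber: re-synchronize the data of all subscribed sequences, e.g.
  -- shortly before switching over to the subscriber.
  ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

  -- Publisher: list the sequences contained in a publication using the new
  -- view added by this patch set.
  SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
   WHERE pubname = 'pub1';

While the sequences are being copied, the worker is reported in
pg_stat_subscription with the worker type "sequence synchronization", per the
monitoring.sgml change above.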
v20250501-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch
From ad527778d2e6a3cba55cfb0b93c09739d85d73aa Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250501 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 74dad46568a..82af9d8a741 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2902,7 +2902,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250501-0004-Enhance-sequence-synchronization-during-su.patch
From 0ca1339397d0d606f80d79047acfae2316f9b1ea Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 28 Apr 2025 11:41:50 +0530
Subject: [PATCH v20250501 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 658 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |   4 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 ++++++
 30 files changed, 1523 insertions(+), 179 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 2e5b6cbecd1..8c5c81818ca 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt of sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note that this is a common function for handling different REFRESH
+ * commands according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			seqRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
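
Incidentally, with the pg_stat_get_subscription() change above, the new worker should be observable through the existing pg_stat_subscription view while it runs; roughly (illustrative only):

SELECT subname, pid, worker_type
FROM pg_stat_subscription
WHERE worker_type = 'sequence synchronization';
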
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..e6a36b0bfca
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,658 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in a
+ * single transaction, fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker, if needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync worker,
+		 * and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * fetch_remote_sequence_data
+ *
+ * Retrieves sequence data (last_value, log_cnt, page_lsn, and is_called) and
+ * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax and seqcycle)
+ * from a remote node.
+ *
+ * Output Parameters:
+ * - log_cnt: The log count of the sequence.
+ * - is_called: Indicates if the sequence has been called.
+ * - page_lsn: The log sequence number of the sequence page.
+ * - last_value: The last value of the sequence.
+ *
+ * Returns:
+ * - TRUE if parameters match for the local and remote sequences.
+ * - FALSE if parameters differ for the local and remote sequences.
+ */
+static bool
+fetch_remote_sequence_data(WalReceiverConn *conn, Oid relid, Oid remoteid,
+						   char *nspname, char *relname, int64 *log_cnt,
+						   bool *is_called, XLogRecPtr *page_lsn,
+						   int64 *last_value)
+{
+#define REMOTE_SEQ_COL_COUNT 10
+	Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID, BOOLOID,
+	LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	bool		isnull;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqincrement;
+	int64		seqmin;
+	int64		seqmax;
+	bool		seqcycle;
+	bool		seq_params_match;
+	HeapTuple	tup;
+	Form_pg_sequence seqform;
+	int			col = 0;
+
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd,
+					 "SELECT last_value, log_cnt, is_called, page_lsn,\n"
+					 "seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle\n"
+					 "FROM pg_sequence_state(%u), pg_sequence WHERE seqrelid = %u",
+					 remoteid, remoteid);
+
+	res = walrcv_exec(conn, cmd.data, REMOTE_SEQ_COL_COUNT, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not fetch sequence info for sequence \"%s.%s\" from publisher: %s",
+						nspname, relname, res->err)));
+
+	/* Process the sequence. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+			 nspname, relname);
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	seq_params_match = seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement;
+
+	ReleaseSysCache(tup);
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seq_params_match;
+}
+
+/*
+ * Copy existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'sequence_mismatch' indicates if a local/remote
+ * sequence parameter mismatch was detected.
+ */
+static XLogRecPtr
+copy_sequence(WalReceiverConn *conn, Relation rel, bool *sequence_mismatch)
+{
+	StringInfoData cmd;
+	int64		seq_last_value;
+	int64		seq_log_cnt;
+	bool		seq_is_called;
+	XLogRecPtr	seq_page_lsn = InvalidXLogRecPtr;
+	WalRcvExecResult *res;
+	Oid			seqRow[] = {OIDOID, CHAROID};
+	TupleTableSlot *slot;
+	LogicalRepRelId remoteid;	/* unique id of the relation */
+	char		relkind PG_USED_FOR_ASSERTS_ONLY;
+	bool		isnull;
+	char	   *nspname = get_namespace_name(RelationGetNamespace(rel));
+	char	   *relname = RelationGetRelationName(rel);
+	Oid			relid = RelationGetRelid(rel);
+
+	Assert(!*sequence_mismatch);
+
+	/* Fetch the OID and relkind of the remote sequence. */
+	initStringInfo(&cmd);
+	appendStringInfo(&cmd, "SELECT c.oid, c.relkind\n"
+					 "FROM pg_catalog.pg_class c\n"
+					 "INNER JOIN pg_catalog.pg_namespace n\n"
+					 "  ON (c.relnamespace = n.oid)\n"
+					 "WHERE n.nspname = %s AND c.relname = %s",
+					 quote_literal_cstr(nspname),
+					 quote_literal_cstr(relname));
+
+	res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence \"%s.%s\" info could not be fetched from publisher: %s",
+					   nspname, relname, res->err));
+
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	if (!tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" not found on publisher",
+					   nspname, relname));
+
+	remoteid = DatumGetObjectId(slot_getattr(slot, 1, &isnull));
+	Assert(!isnull);
+	relkind = DatumGetChar(slot_getattr(slot, 2, &isnull));
+	Assert(!isnull);
+	Assert(relkind == RELKIND_SEQUENCE);
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	*sequence_mismatch = !fetch_remote_sequence_data(conn, relid, remoteid,
+													 nspname, relname,
+													 &seq_log_cnt, &seq_is_called,
+													 &seq_page_lsn, &seq_last_value);
+
+	/* Update the sequence only if the parameters are identical. */
+	if (*sequence_mismatch)
+		return InvalidXLogRecPtr;
+	else
+	{
+		SetSequence(RelationGetRelid(rel), seq_last_value, seq_is_called,
+					seq_log_cnt);
+
+		/* Return the LSN when the sequence state was set. */
+		return seq_page_lsn;
+	}
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs, Relation seqrel)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 get_namespace_name(RelationGetNamespace(seqrel)),
+					 RelationGetRelationName(seqrel));
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			curr_seq = 0;
+	int			seq_count;
+	int			curr_batch_seq = 0;
+	bool		start_txn = true;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs overhead of
+ * starting and committing transactions repeatedly. On the other hand, we want
+ * to avoid keeping this batch transaction open for extended periods so it is
+ * currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+		XLogRecPtr	sequence_lsn;
+		bool		sequence_mismatch = false;
+
+		CHECK_FOR_INTERRUPTS();
+
+		if (start_txn)
+		{
+			StartTransactionCommand();
+			start_txn = false;
+		}
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   RelationGetRelationName(sequence_rel));
+
+		/*
+		 * If the sequence copy fails, report a warning for the sequences whose
+		 * parameters did not match before exiting.
+		 */
+		PG_TRY();
+		{
+			sequence_lsn = copy_sequence(LogRepWorkerWalRcvConn, sequence_rel,
+										 &sequence_mismatch);
+		}
+		PG_CATCH();
+		{
+			report_mismatched_sequences(mismatched_seqs);
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		if (sequence_mismatch)
+			append_mismatched_sequences(mismatched_seqs, sequence_rel);
+		else
+			UpdateSubscriptionRelState(subid, seqinfo->relid,
+									   SUBREL_STATE_READY, sequence_lsn);
+
+		table_close(sequence_rel, NoLock);
+
+		curr_seq++;
+		curr_batch_seq++;
+
+		/*
+		 * Have we reached the end of the current batch of sequences, or
+		 * synchronized the last of the remaining sequences?
+		 */
+		if (curr_batch_seq == MAX_SEQUENCES_SYNC_PER_BATCH ||
+			curr_seq == seq_count)
+		{
+			if (message_level_is_interesting(DEBUG1))
+			{
+				/* LOG all the sequences synchronized during current batch. */
+				for (int i = 0; i < curr_batch_seq; i++)
+				{
+					SubscriptionRelState *done_seq;
+
+					done_seq = (SubscriptionRelState *) lfirst(list_nth_cell(sequences_not_synced,
+																			 (curr_seq - curr_batch_seq) + i));
+
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											get_rel_name(done_seq->relid)));
+				}
+			}
+
+			if (mismatched_seqs->len)
+			{
+				sequence_sync_error = true;
+				report_mismatched_sequences(mismatched_seqs);
+			}
+
+			ereport(LOG,
+					errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+						   curr_seq, seq_count, get_subscription_name(subid, false)));
+
+			/* Commit this batch. */
+			CommitTransactionCommand();
+			start_txn = true;
+
+			/* Prepare for next batch */
+			curr_batch_seq = 0;
+		}
+	}
+
+	/*
+	 * If sequence synchronization failed due to a parameter mismatch, record
+	 * the failure time so the sequencesync worker is not relaunched immediately.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters of some sequences do not match between the publisher and subscriber"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by a
+ * system resource error and are not repeatable, so there is nothing useful
+ * this function could do about them.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
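
To make the flow in copy_sequence()/fetch_remote_sequence_data() easier to follow: for each sequence the worker effectively issues a query of this shape on the publisher (16394 is a placeholder OID) and then, provided the listed parameters match the local sequence, applies last_value/log_cnt/is_called locally via SetSequence():

SELECT last_value, log_cnt, is_called, page_lsn,
       seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle
FROM pg_sequence_state(16394), pg_sequence
WHERE seqrelid = 16394;
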
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates(void)
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..688e5c85c47 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..1742968427a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4648,8 +4674,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4754,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4777,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4832,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 3ba6cfe60f0..d409ea5f638 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10dc03cd7cb..6fddb5ea635 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8071134643c..a84fb506571 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..0c706bd9cd5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
-- 
2.43.0
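For reviewers skimming the attachments, here is a condensed SQL sketch of the workflow that the new 036_sequences.pl test above exercises, assuming the whole v20250501 patch set is applied. The publication, subscription, and sequence names are the ones used in the test; the connection string is illustrative only.

-- On the publisher:
CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

-- On the subscriber (illustrative connection string):
CREATE SUBSCRIPTION regress_seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION regress_seq_pub;

-- After new sequences are created and published on the publisher,
-- REFRESH PUBLICATION picks up only the newly published sequences:
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;

-- REFRESH PUBLICATION SEQUENCES additionally resynchronizes the
-- sequences that were already being replicated:
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

-- Verify the synced state of a sequence on the subscriber:
SELECT last_value, log_cnt, is_called FROM regress_s1;

As the test demonstrates, plain REFRESH PUBLICATION leaves already-synchronized sequences untouched (and skips new ones when copy_data = false), while REFRESH PUBLICATION SEQUENCES re-fetches the values of all published sequences.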

Attachment: v20250501-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 1a2764c78bdff354b43aefb8d02c0e63d9963298 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250501 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands have been enhanced: \d now displays
which publications contain the specified sequence, and \dRp shows
whether a specified publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..fe6fb417f3d 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +121,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +152,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -279,10 +290,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +309,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +461,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e2e7975b34e..3ba6cfe60f0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 55d892d9c16..0dda0b9a4be 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3214,6 +3214,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index c916b9299a8..10dc03cd7cb 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e5879e00dff..74dad46568a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2342,6 +2342,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#215vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#214)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 1 May 2025 at 08:46, vignesh C <vignesh21@gmail.com> wrote:

On Tue, 29 Apr 2025 at 06:15, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

Some trivial review comments for DOCS patch v20250428-0005.

======
doc/src/sgml/logical-replication.sgml

1.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL
TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronised at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
</para>

AFAIK the PostgreSQL documentation uses US spelling:

/synchronised/synchronized/

~~~

2.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will of course be
+     replicated as part of the table, the sequences themselves do not replicate
+     ongoing changes. On the subscriber, a sequence will retain the last value

I didn't think that you needed to say "of course" here.

Thanks for the comments; the updated patch includes changes for both of these.

There was one pending open comment #6 from [1]. This has been
addressed in the attached patch.
[1]: /messages/by-id/OSCPR01MB14966DA8CB749A0D4E9F3F7A7F50E2@OSCPR01MB14966.jpnprd01.prod.outlook.com

Regards,
Vignesh
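
To make the quoted documentation concrete, here is a minimal SQL sketch,
assuming the ALL SEQUENCES patch attached to this message is applied; the
sequence and publication names are illustrative, not taken from the patch.

CREATE SEQUENCE regress_seq_demo;

-- Publish every sequence in the database, including ones created later.
CREATE PUBLICATION pub_all_seqs FOR ALL SEQUENCES;

-- puballsequences is the pg_publication flag set by FOR ALL SEQUENCES.
SELECT pubname, puballtables, puballsequences
  FROM pg_publication WHERE pubname = 'pub_all_seqs';

-- Incremental nextval() calls are not replicated; a subscriber only picks
-- up the sequence state when it is explicitly synchronized.
SELECT nextval('regress_seq_demo');

DROP PUBLICATION pub_all_seqs;
DROP SEQUENCE regress_seq_demo;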

Attachments:

v20250503-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 39a705eb74bdc1580ea06d61fdb91699755f2a04 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250503 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 111 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index af3d056b992..9d1ae29441e 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..2e5b6cbecd1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	Oid			seq_relid = PG_GETARG_OID(0);
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value stored in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..8071134643c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'regclass',
+  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,o,o,o,o}',
+  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..4bc21c7af95 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..23341a36caa 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
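
As a quick illustration of the pg_sequence_state() function introduced by the
0001 patch above, here is a small SQL sketch using a throwaway sequence; the
comments about how the output is meant to be used during sequence
synchronization reflect the intent discussed in this thread, not behavior
enforced by the function itself.

CREATE SEQUENCE demo_seq;

-- Freshly created sequence: last_value is the start value, is_called is false.
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');

SELECT nextval('demo_seq');

-- After nextval(): is_called is true and page_lsn reflects the last WAL
-- record that touched the sequence page, which is what a sequence
-- synchronization step could compare against a remote LSN.
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('demo_seq');

DROP SEQUENCE demo_seq;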

v20250503-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From c1fceb64203468cfb4532e2117e4da7247a3b6ba Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250503 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 74dad46568a..82af9d8a741 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2902,7 +2902,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250503-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 85c093d80e3cf7763fdd3c0346c571445b6dd851 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250503 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d command now lists the publications that
contain the specified sequence, and \dRp shows whether a publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..fe6fb417f3d 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +121,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +152,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -279,10 +290,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +309,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +461,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
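
(The grammar changes above, together with preprocess_pub_all_objtype_list(),
accept statements like the following sketch; the publication names are
arbitrary and the same cases appear in the regression tests further down.)

    CREATE PUBLICATION pub_seq  FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_both FOR ALL TABLES, ALL SEQUENCES;
    -- rejected, each object type may be specified only once:
    -- CREATE PUBLICATION pub_bad FOR ALL SEQUENCES, ALL SEQUENCES;
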
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e2e7975b34e..3ba6cfe60f0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
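
(With both flags set, dumpPublication() above emits the combined form, as in
the following sketch, which matches the pg_dump TAP test below.)

    CREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');
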
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 55d892d9c16..0dda0b9a4be 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3214,6 +3214,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index c916b9299a8..10dc03cd7cb 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3524,12 +3524,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e5879e00dff..74dad46568a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2342,6 +2342,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250503-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 38ef64095f33ad8c8dbf14a52117ee4ffa07fb10 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 28 Apr 2025 11:41:50 +0530
Subject: [PATCH v20250503 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of the sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing the synchronized sequences in batches of 100 (see the sketch below).
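
A rough SQL-level sketch of steps (a) and (c) (the sequence name is made up;
the worker itself applies the fetched values through the new SetSequence()
call rather than setval(), so setval() here is only an approximation that
ignores log_cnt):

    -- on the publisher: read the on-disk state of the published sequence
    -- (pg_sequence_state() as modified by this patch takes schema and sequence names)
    SELECT * FROM pg_sequence_state('public', 'seq1');

    -- on the subscriber: apply the fetched values; 42/true stand in for the
    -- last_value/is_called returned by the publisher
    SELECT setval('public.seq1', 42, true);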

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
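
For example (the subscription name is hypothetical), after new sequences are
published, the new command can be run on the subscriber; the per-sequence
state tracked in pg_subscription_rel is reset to INIT ('i') and reaches
READY ('r') once the sequencesync worker finishes:

    ALTER SUBSCRIPTION regress_sub REFRESH PUBLICATION SEQUENCES;

    SELECT srrelid::regclass AS sequence, srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';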
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  38 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 650 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |  13 +-
 src/include/catalog/pg_subscription_rel.h     |  11 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/sequence.out        |   2 +-
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/regress/sql/sequence.sql             |   2 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 ++++++
 src/tools/pgindent/typedefs.list              |   1 +
 33 files changed, 1540 insertions(+), 186 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
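
As a usage sketch (the publication name 'mypub' is hypothetical): this
set-returning function backs the pg_publication_sequences view added by this
patch to system_views.sql, and either can be queried directly.

    SELECT gps.relid::regclass
    FROM pg_get_publication_sequences('mypub') AS gps(relid);

    SELECT * FROM pg_publication_sequences WHERE pubname = 'mypub';
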
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables: when all_states is true, get all tables; otherwise get
+ * only tables that have not reached READY state.
+ * If getting sequences: when all_states is true, get all sequences; otherwise
+ * get only sequences that have not reached READY state (i.e. are still in
+ * INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 2e5b6cbecd1..c5659de91e3 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker, to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,13 +1891,19 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
 Datum
 pg_sequence_state(PG_FUNCTION_ARGS)
 {
-	Oid			seq_relid = PG_GETARG_OID(0);
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
 	SeqTable	elm;
 	Relation	seqrel;
 	Buffer		buf;
@@ -1917,6 +1925,14 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
 		elog(ERROR, "return type must be a row type");
 
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("logical replication sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
 	/* open and lock sequence */
 	init_sequence(seq_relid, &elm, &seqrel);
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The relation list includes
+			 * both tables and sequences fetched from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that were added to or removed from the publication since the
+ * subscription was created or last refreshed.
+ *
+ * If 'refresh_sequences' is true, do the same for sequences.
+ *
+ * Note that this is a common function for handling the different REFRESH
+ * commands, depending on the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
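
For reference, fetch_sequence_list() above ends up issuing a query of roughly
this shape on the publisher; the publication names below are invented and the
IN list is produced by GetPublicationsStr():

SELECT DISTINCT s.schemaname, s.sequencename
FROM pg_catalog.pg_publication_sequences s
WHERE s.pubname IN ('pub1', 'pub2');
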
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the current time as the sequencesync worker failure time in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
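
With the launcher changes above, a running sequencesync worker should be
observable through the existing statistics view; for example, a query along
these lines (assuming pg_stat_subscription continues to expose the worker type
from pg_stat_get_subscription) would report "sequence synchronization" in
worker_type:

SELECT subid, subname, pid, worker_type FROM pg_stat_subscription;
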
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..d743b8982ae
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,650 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * transaction by fetching the sequence value and page LSN from the remote
+ * publisher and updating the corresponding local sequences.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is already running, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * any sequences still need to be synced after the sequencesync worker has
+ * exited, a new one can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Get the sequence object from the list of sequences.
+ */
+static LogicalRepSequenceInfo *
+get_sequence_obj(List *sequences, char *nspname, char *seqname)
+{
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, sequences)
+	{
+		if (!strcmp(seqinfo->nspname, nspname) &&
+			!strcmp(seqinfo->seqname, seqname))
+			return seqinfo;
+	}
+
+	return NULL;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter or re-create the local sequences so that they have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs,
+							LogicalRepSequenceInfo *seqinfo)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+}
+
+/*
+ * Copy the existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'mismatched_seqs' collects the names of the mismatched
+ * sequences. The output parameter 'sequence_sync_error' indicates whether any
+ * local/remote sequence parameter mismatch was detected.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid,
+			   StringInfo mismatched_seqs, bool *sequence_sync_error)
+{
+	int			seq_count = list_length(remotesequences);
+	int			curr_copied_seq = 0;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs overhead of
+ * starting and committing transactions repeatedly. On the other hand, we want
+ * to avoid keeping this batch transaction open for extended periods so it is
+ * currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (curr_copied_seq < seq_count)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfoData cmd;
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+		int			col = 0;
+		int			curr_batch_seq_count = MAX_SEQUENCES_SYNC_PER_BATCH;
+
+		StringInfo	seqstr = makeStringInfo();
+		bool		first = true;
+
+		StartTransactionCommand();
+
+		if ((seq_count - curr_copied_seq) < MAX_SEQUENCES_SYNC_PER_BATCH)
+			curr_batch_seq_count = seq_count - curr_copied_seq;
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < curr_batch_seq_count; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo;
+
+			seqinfo = (LogicalRepSequenceInfo *) lfirst(list_nth_cell(remotesequences,
+																	  curr_copied_seq + i));
+			if (first)
+				first = false;
+			else
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(&cmd);
+		appendStringInfo(&cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 seqstr->data);
+
+		elog(DEBUG1, "executing query: %s", cmd.data);
+
+		res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			bool		seq_params_match;
+			Form_pg_sequence seqform;
+			bool		isnull;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+			col = 0;
+
+			seqinfo = get_sequence_obj(remotesequences, nspname, seqname);
+			Assert(seqinfo);
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			seq_params_match = seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement;
+
+			ReleaseSysCache(tup);
+
+			/* Update the sequence only if the parameters are identical. */
+			if (seq_params_match)
+			{
+				SetSequence(seqinfo->localrelid, last_value, is_called, log_cnt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+			}
+			else
+			{
+				*sequence_sync_error = true;
+				append_mismatched_sequences(mismatched_seqs, seqinfo);
+			}
+
+			curr_copied_seq++;
+		}
+
+		if (message_level_is_interesting(DEBUG1))
+		{
+			/* LOG all the sequences synchronized during current batch. */
+			for (int i = 0; i < curr_batch_seq_count; i++)
+			{
+				LogicalRepSequenceInfo *done_seq;
+
+				done_seq = (LogicalRepSequenceInfo *) lfirst(list_nth_cell(remotesequences,
+																		   (curr_copied_seq - curr_batch_seq_count) + i));
+
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+										get_subscription_name(subid, false),
+										done_seq->seqname));
+			}
+		}
+
+		ereport(LOG,
+				errmsg("logical replication synchronized %d of %d sequences for subscription \"%s\"",
+					   curr_copied_seq, seq_count, get_subscription_name(subid, false)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		pfree(cmd.data);
+
+		/* Commit this batch, and prepare for next batch. */
+		CommitTransactionCommand();
+	}
+
+	if (mismatched_seqs->len)
+	{
+		*sequence_sync_error = true;
+		report_mismatched_sequences(mismatched_seqs);
+	}
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/*
+	 * In case sequence copy fails, throw a warning for the sequences that did
+	 * not match before exiting.
+	 */
+	PG_TRY();
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid,
+					   mismatched_seqs, &sequence_sync_error);
+	}
+	PG_CATCH();
+	{
+		report_mismatched_sequences(mismatched_seqs);
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+
+	/*
+	 * If sequence synchronization failed due to a parameter mismatch, record
+	 * the failure time to prevent the sequencesync worker from being
+	 * restarted immediately.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(LOG,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization failed because the parameters of some sequences do not match between the publisher and subscriber"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
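
To illustrate the batching in copy_sequences(), a batch of two sequences
produces a publisher query of roughly this shape (schema and sequence names
invented; up to MAX_SEQUENCES_SYNC_PER_BATCH name pairs go into the VALUES
list):

SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
       seq.seqstart, seq.seqincrement, seq.seqmin,
       seq.seqmax, seq.seqcycle
FROM ( VALUES ('public', 's1'), ('public', 's2') ) AS s (schname, seqname)
JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true
JOIN pg_namespace n ON n.nspname = s.schname
JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
JOIN pg_sequence seq ON seq.seqrelid = c.oid;
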
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates(void)
 {
+	/*
+	 * has_subtables is static because the same value can be reused until the
+	 * relevant system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..688e5c85c47 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..1742968427a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4648,8 +4674,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4754,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4777,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4832,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 3ba6cfe60f0..d409ea5f638 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10dc03cd7cb..6fddb5ea635 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8071134643c..bf0adbb9aa8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3436,10 +3436,10 @@
 { oid => '8051',
   descr => 'current on-disk sequence state',
   proname => 'pg_sequence_state', provolatile => 'v',
-  prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,pg_lsn,int8,int8,bool}',
-  proargmodes => '{i,o,o,o,o}',
-  proargnames => '{seq_oid,page_lsn,last_value,log_cnt,is_called}',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,seq_name,page_lsn,last_value,log_cnt,is_called}',
   prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..ad71a3ca84f 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,13 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +97,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 4bc21c7af95..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,7 +161,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
-SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
  last_value | log_cnt | is_called 
 ------------+---------+-----------
           1 |       0 | f
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 23341a36caa..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,7 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
-SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('sequence_test');
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 82af9d8a741..55381369260 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1622,6 +1622,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250503-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 5bc44b55f6d339453ff5cc3c08b61cecc6b9e37b Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250503 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 241 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 377 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fd6e3e02890..290ea5ac457 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5311,8 +5311,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5335,10 +5335,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..43d63d6bace 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,201 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged if any differences
+     are detected.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2314,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2644,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2658,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..cac2c8bf7e3 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#216shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#215)
Re: Logical Replication of sequences

On Sat, May 3, 2025 at 7:27 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the updated patch has the changes for the same.

Thanks for the patches. Please find a few comments:

1)
patch004 commit msg:
- Drop published sequences are removed from pg_subscription_rel.

Drop -->Dropped

2)
copy_sequences:

LOG: Executing query :SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
seq.seqstart, seq.seqincrement, seq.seqmin,
seq.seqmax, seq.seqcycle
FROM ( VALUES ('public', 'myseq1'), ('public', 'myseq3') ) AS s
(schname, seqname)
JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true
....

Do we really need to log this query? If so, should it be DEBUG1/DEBUG2?
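
For illustration, a minimal sketch of demoting it (the StringInfo name "cmd"
is only a placeholder for whatever variable holds the generated query text):

/* emit the generated query only when DEBUG1 logging is enabled */
elog(DEBUG1, "sequence synchronization query: %s", cmd.data);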

3)
In the log, we get:

------------------
LOG: logical replication synchronized 9 of 9 sequences for subscription "sub1"
WARNING: parameters differ for the remote and local sequences
("public.myseq1", "public.myseq3") for subscription "sub1"

LOG: logical replication synchronized 2 of 2 sequences for subscription "sub1"
WARNING: parameters differ for the remote and local sequences
("public.myseq1", "public.myseq3") for subscription "sub1"
------------------

This is confusing. I have 9 sequences, out of which 2 are mismatched.
So on REFRESH I get the first message as 'synchronized 9 of 9' and
later when it attempts to resynchronize pending ones automatically, it
keeps on displaying 'synchronized 2 of 2'.

Can we mention something like below:
-----------------
Unsynchronized sequences: 9, attempted in this batch: 9, succeeded: 7,
mismatched/failed: 2

So that if it is more than 100, say 120, it will say:
Unsynchronized sequences: 120, attempted in this batch: 100,
succeeded: 98, mismatched/failed: 2
And in the next batch:
Unsynchronized sequences: 120, attempted in this batch: 20, succeeded:
20, mismatched: 0

And while attempting to synchronize failed ones, it will say:
Unsynchronized sequences: 2, attempted in this batch: 2, succeeded: 0,
mismatched: 2
-----------------

Please feel free to change the words. The intent is to get a clear
picture on what is happening.
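
Just as an illustration of the intent (the counter variables below are
placeholders, not names taken from the patch):

/* one summary line per synchronization batch */
ereport(LOG,
        errmsg("unsynchronized sequences: %d, attempted in this batch: %d, succeeded: %d, mismatched: %d",
               total_unsynced, batch_attempted, batch_succeeded, batch_mismatched));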

4)
Why, in patch001, do we have 'pg_sequence_state' with one argument while
in the 4th patch it is changed to 2 args? Is it intentional to have it the
current way in patch001?

5)
Section1:
<para>
A new <firstterm>sequence synchronization worker</firstterm> will be started
after executing any of the above subscriber commands, and will exit once the
sequences are synchronized.
</para>

Section2:
<sect2 id="sequence-definition-mismatches">
<title>Sequence Definition Mismatches</title>
<warning>
<para>
During sequence synchronization, the sequence definitions of the publisher
and the subscriber are compared. A WARNING is logged if any differences
are detected.
</para>
</warning>

Neither section mentions that the synchronization worker will keep
attempting to synchronize the failed/mismatched sequences until the
differences are resolved (provided disable_on_error is not enabled).
I think we can mention such a thing briefly in the
'sequence-definition-mismatches' section.

thanks
Shveta

#217Nisha Moond
nisha.moond412@gmail.com
In reply to: vignesh C (#215)
Re: Logical Replication of sequences

On Sat, May 3, 2025 at 7:28 PM vignesh C <vignesh21@gmail.com> wrote:

There was one pending open comment #6 from [1]. This has been
addressed in the attached patch.

Thank you for the patches. Here are my comments for patch-004 (sequencesync.c):

copy_sequences()
-------------------
1)
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(seqstr, ", ");

We can avoid the additional variable here; suggestion:
if (seqstr->len > 0)
appendStringInfoString(seqstr, ", ");
~~~~

2)
+ else
+ {
+ *sequence_sync_error = true;
+ append_mismatched_sequences(mismatched_seqs, seqinfo);
+ }

I think *sequence_sync_error = true can be removed from here; we can
avoid setting it for every mismatch since it is already set at the
end of the function if any sequence mismatches are found.
~~~~

3)
+ if (message_level_is_interesting(DEBUG1))
+ {
+ /* LOG all the sequences synchronized during current batch. */
+ for (int i = 0; i < curr_batch_seq_count; i++)
+ {
+ LogicalRepSequenceInfo *done_seq;
...
+
+ ereport(DEBUG1,
+ errmsg_internal("logical replication synchronization for
subscription \"%s\", sequence \"%s\" has finished",
+ get_subscription_name(subid, false),
+ done_seq->seqname));
+ }
+ }

3a) I think the DEBUG1 log can be moved inside the while loop just
above, to avoid traversing the list again unnecessarily.
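
Roughly like the sketch below, assuming the per-sequence struct that the loop
already iterates over is available as "seqinfo":

/*
 * Inside the existing per-sequence loop, right after a successful sync,
 * so the batch list does not have to be walked a second time.
 */
if (message_level_is_interesting(DEBUG1))
    ereport(DEBUG1,
            errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
                            get_subscription_name(subid, false),
                            seqinfo->seqname));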
~~~~

LogicalRepSyncSequences():
-----------------------------
4)
+ /*
+ * Sequence synchronization failed due to a parameter mismatch. Set the
+ * failure time to prevent immediate initiation of the sequencesync
+ * worker.
+ */
+ if (sequence_sync_error)
+ {
+ logicalrep_seqsyncworker_set_failuretime();
+ ereport(LOG,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("sequence synchronization failed because the parameters
between the publisher and subscriber do not match for all
sequences"));
+ }

I think saying "sequence synchronization failed" could be misleading,
as the matched sequences will still be synced successfully. It might
be clearer to reword it to something like:
"sequence synchronization failed for some sequences because the
parameters between the publisher and subscriber do not match."
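
For example, something along these lines (only the message wording changes;
the errcode and surrounding logic stay as in the patch):

ereport(LOG,
        errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
        errmsg("sequence synchronization failed for some sequences because the parameters between the publisher and subscriber do not match"));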
~~~~

--
Thanks,
Nisha

#218vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#216)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 9 May 2025 at 14:28, shveta malik <shveta.malik@gmail.com> wrote:

On Sat, May 3, 2025 at 7:27 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the updated patch has the changes for the same.

Thanks for the patches. Please find a few comments:

1)
patch004 commit msg:
- Drop published sequences are removed from pg_subscription_rel.

Drop -->Dropped

Modified

2)
copy_sequences:

LOG: Executing query :SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
seq.seqstart, seq.seqincrement, seq.seqmin,
seq.seqmax, seq.seqcycle
FROM ( VALUES ('public', 'myseq1'), ('public', 'myseq3') ) AS s
(schname, seqname)
JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true
....

Do we really need to log this query? If so, should it be DEBUG1/DEBUG2?

This is not required; removed it

3)
In the log, we get:

------------------
LOG: logical replication synchronized 9 of 9 sequences for subscription "sub1"
WARNING: parameters differ for the remote and local sequences
("public.myseq1", "public.myseq3") for subscription "sub1"

LOG: logical replication synchronized 2 of 2 sequences for subscription "sub1"
WARNING: parameters differ for the remote and local sequences
("public.myseq1", "public.myseq3") for subscription "sub1"
------------------

This is confusing. I have 9 sequences, out of which 2 are mismatched.
So on REFRESH I get the first message as 'synchronized 9 of 9' and
later when it attempts to resynchronize pending ones automatically, it
keeps on displaying 'synchronized 2 of 2'.

Can we mention something like below:
-----------------
Unsynchronized sequences: 9, attempted in this batch: 9, succeeded: 7,
mismatched/failed: 2

So that if it is more than 100, say 120, it will say:
Unsynchronized sequences: 120, attempted in this batch: 100,
succeeded: 98, mismatched/failed: 2
And in the next batch:
Unsynchronized sequences: 120, attempted in this batch: 20, succeeded:
20, mismatched: 0

And while attempting to synchronize failed ones, it will say:
Unsynchronized sequences: 2, attempted in this batch: 2, succeeded: 0,
mismatched: 2
-----------------

Please feel free to change the words. The intent is to get a clear
picture on what is happening.

Fixed this

4)
Why, in patch001, do we have 'pg_sequence_state' with one argument while
in the 4th patch it is changed to 2 args? Is it intentional to have it the
current way in patch001?

This should have been in 001 itself; moved these changes to the 001 patch

5)
Section1:
<para>
A new <firstterm>sequence synchronization worker</firstterm> will be started
after executing any of the above subscriber commands, and will exit once the
sequences are synchronized.
</para>

Section2:
<sect2 id="sequence-definition-mismatches">
<title>Sequence Definition Mismatches</title>
<warning>
<para>
During sequence synchronization, the sequence definitions of the publisher
and the subscriber are compared. A WARNING is logged if any differences
are detected.
</para>
</warning>

Neither section mentions that the synchronization worker will keep
attempting to synchronize the failed/mismatched sequences until the
differences are resolved (provided disable_on_error is not enabled).
I think we can mention such a thing briefly in the
'sequence-definition-mismatches' section.

Modified

The attached v20250514 version of the patches has the changes for the same.

Regards,
Vignesh

Attachments:

v20250514-0001-Introduce-pg_sequence_state-function-for-e.patchapplication/octet-stream; name=v20250514-0001-Introduce-pg_sequence_state-function-for-e.patchDownload
From 4022d4c99d97ac43929b48d9b8fe9d2605663fdc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250514 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index b405525a465..7423f2413a3 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..01cd0e07fc2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value recorded in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..ed15ac697ce 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,seq_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250514-0005-Documentation-for-sequence-synchronization.patchapplication/octet-stream; name=v20250514-0005-Documentation-for-sequence-synchronization.patchDownload
From 6ed57d38e44c2a7496b6639d0835f6d902e4ae50 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250514 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 380 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 23d2b1be424..334c47e0034 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5311,8 +5311,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5335,10 +5335,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..bff3ea8bdc5 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,204 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged listing all differing
+     sequences before the process exits. The apply worker detects the failure
+     and repeatedly respawns the sequence synchronization worker to continue
+     the synchronization process until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2317,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2647,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2661,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..cac2c8bf7e3 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and to
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences they
+   contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

v20250514-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchapplication/octet-stream; name=v20250514-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchDownload
From f793905b81f1611cfae525658e6ea2ff5994e54e Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250514 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within SyncFinishWorker(). Even if
+		 * we fail to remove the origin here, the apply worker will ensure it
+		 * is cleaned up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d3e32001b20..ac0eb8ef27a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2903,7 +2903,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250514-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 6de40e329ad8095b324c79cb431826c1420d2104 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250514 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now shows which publications include a
given sequence, and \dRp shows whether a publication includes all
sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
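
For illustration, a minimal usage sketch of the syntax added here
(publication names are arbitrary):

    CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
    CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;

    -- \dRp+ now reports an "All sequences" attribute for these
    -- publications, and \d on a sequence lists the publications
    -- that include it.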
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..fe6fb417f3d 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +121,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +152,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -279,10 +290,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +309,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +461,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3c4268b271a..1c094d7d605 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19631,6 +19664,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e2e7975b34e..3ba6cfe60f0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 55d892d9c16..0dda0b9a4be 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3214,6 +3214,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec65ab79fec..3dc84074e63 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9ea573fae21..d3e32001b20 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2343,6 +2343,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250514-0004-Enhance-sequence-synchronization-during-su.patch (application/octet-stream)
From ccf1298db2545f89c5cbd8d3ddd851608f3c3072 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 13 May 2025 21:11:17 +0530
Subject: [PATCH v20250514 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences that are in INIT state, using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
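
To make the intended workflow concrete, here is a minimal SQL sketch of these
paths; the publication, subscription, and connection string names are
hypothetical and not part of the patch:

    -- publisher
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber: published sequences are added to pg_subscription_rel in
    -- INIT state and a sequencesync worker synchronizes them
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION seq_pub;

    -- later, re-synchronize all sequences from the publisher
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;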
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  28 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 638 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |  11 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 31 files changed, 1512 insertions(+), 180 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If true, return all of the requested relations (tables and/or sequences)
+ * regardless of their state.
+ * If false, return only the requested relations that have not yet reached
+ * READY state; for sequences, this means they are still in INIT state
+ * awaiting synchronization.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
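
As a usage note (not part of the patch): the per-relation sync states that
GetSubscriptionRelations() reads can be inspected on the subscriber with a
query along these lines, assuming a subscription named seq_sub; srsubstate
'i' is the INIT state and 'r' is READY:

    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate AS state
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S'
      AND sr.srsubid = (SELECT oid FROM pg_subscription
                        WHERE subname = 'seq_sub');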
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
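
As an aside, the new pg_publication_sequences view gives a publisher-side way
to list published sequences; a minimal sketch using one of the publications
from the regression tests above:

    SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'regress_pub_forallsequences1';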
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 01cd0e07fc2..c5659de91e3 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set a
+ * sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
@@ -1924,7 +1930,7 @@ pg_sequence_state(PG_FUNCTION_ARGS)
 	if (!OidIsValid(seq_relid))
 		ereport(ERROR,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				errmsg("sequence \"%s.%s\" does not exist",
+				errmsg("logical replication sequence \"%s.%s\" does not exist",
 					   schema_name, sequence_name));
 
 	/* open and lock sequence */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or last refreshed.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
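
For reference, the query that fetch_sequence_list() sends over the walreceiver
connection looks roughly like the following (the publication names here are
hypothetical):

    SELECT DISTINCT s.schemaname, s.sequencename
    FROM pg_catalog.pg_publication_sequences s
    WHERE s.pubname IN ('seq_pub1', 'seq_pub2');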
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1c094d7d605..d470c1cd2fa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..c986acc32b8
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,638 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Get the sequence object from the list of sequences.
+ */
+static LogicalRepSequenceInfo *
+get_sequence_obj(List *sequences, char *nspname, char *seqname)
+{
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, sequences)
+	{
+		if (!strcmp(seqinfo->nspname, nspname) &&
+			!strcmp(seqinfo->seqname, seqname))
+			return seqinfo;
+	}
+
+	return NULL;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs,
+							LogicalRepSequenceInfo *seqinfo)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+}
+
+/*
+ * Copy existing data of sequnces from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'mismatched_seqs' accumulates the names of the
+ * mismatched sequences. The output parameter 'sequence_sync_error' indicates
+ * whether any local/remote sequence parameter mismatch was detected.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid,
+			   StringInfo mismatched_seqs, bool *sequence_sync_error)
+{
+	int			total_seq = list_length(remotesequences);
+	int			curr_seq = 0;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs overhead of
+ * starting and committing transactions repeatedly. On the other hand, we want
+ * to avoid keeping this batch transaction open for extended periods so it is
+ * currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (curr_seq < total_seq)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfoData cmd;
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+		int			col = 0;
+		int			batch_seq_count = MAX_SEQUENCES_SYNC_PER_BATCH;
+		int			batch_success_count = 0;
+		int			batch_mismatch_count = 0;
+
+		StringInfo	seqstr = makeStringInfo();
+
+		StartTransactionCommand();
+
+		if ((total_seq - curr_seq) < MAX_SEQUENCES_SYNC_PER_BATCH)
+			batch_seq_count = total_seq - curr_seq;
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_seq_count; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo;
+
+			seqinfo = (LogicalRepSequenceInfo *) lfirst(list_nth_cell(remotesequences,
+																	  curr_seq + i));
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(&cmd);
+		appendStringInfo(&cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequences information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			bool		seq_params_match;
+			Form_pg_sequence seqform;
+			bool		isnull;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+			col = 0;
+
+			seqinfo = get_sequence_obj(remotesequences, nspname, seqname);
+			Assert(seqinfo);
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			seq_params_match = seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement;
+
+			ReleaseSysCache(tup);
+
+			/* Update the sequence only if the parameters are identical. */
+			if (seq_params_match)
+			{
+				SetSequence(seqinfo->localrelid, last_value, is_called, log_cnt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											seqinfo->seqname));
+
+				batch_success_count++;
+			}
+			else
+			{
+				append_mismatched_sequences(mismatched_seqs, seqinfo);
+				batch_mismatch_count++;
+			}
+
+			curr_seq++;
+		}
+
+		ereport(LOG,
+				errmsg("Logical replication sequence synchronization - total unsynchronized: %d, attempted in this batch: %d; succeeded in this batch: %d; mismatched in this batch: %d for subscription: \"%s\"",
+					   total_seq, batch_seq_count, batch_success_count,
+					   batch_mismatch_count, get_subscription_name(subid, false)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		pfree(cmd.data);
+
+		/* Commit this batch, and prepare for next batch. */
+		CommitTransactionCommand();
+	}
+
+	if (mismatched_seqs->len)
+	{
+		*sequence_sync_error = true;
+		report_mismatched_sequences(mismatched_seqs);
+	}
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/*
+	 * In case sequence copy fails, throw a warning for the sequences that did
+	 * not match before exiting.
+	 */
+	PG_TRY();
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid,
+					   mismatched_seqs, &sequence_sync_error);
+	}
+	PG_CATCH();
+	{
+		report_mismatched_sequences(mismatched_seqs);
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent immediate initiation of the sequencesync
+	 * worker.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization worker failed: one or more sequences have mismatched parameters between the publisher and subscriber"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates()
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..688e5c85c47 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..1742968427a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4648,8 +4674,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4754,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4777,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4832,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 3ba6cfe60f0..d409ea5f638 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 3dc84074e63..1206c515a0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ed15ac697ce..bf0adbb9aa8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..ad71a3ca84f 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,13 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +97,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ac0eb8ef27a..b27db84589f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1623,6 +1623,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

#219vignesh C
vignesh21@gmail.com
In reply to: Nisha Moond (#217)
Re: Logical Replication of sequences

On Wed, 14 May 2025 at 09:55, Nisha Moond <nisha.moond412@gmail.com> wrote:

On Sat, May 3, 2025 at 7:28 PM vignesh C <vignesh21@gmail.com> wrote:

There was one pending open comment #6 from [1]. This has been
addressed in the attached patch.

Thank you for the patches, here are my comments for patch-004: sequencesync.c

copy_sequences()
-------------------
1)
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(seqstr, ", ");

We can avoid additional variable here, suggestion -
if (seqstr->len > 0)
appendStringInfoString(seqstr, ", ");

Modified

2)
+ else
+ {
+ *sequence_sync_error = true;
+ append_mismatched_sequences(mismatched_seqs, seqinfo);
+ }

I think *sequence_sync_error = true can be removed from here, as we
can avoid setting it for every mismatch, as it is already set at the
end of the function if any sequence mismatches are found.

Modified

3)
+ if (message_level_is_interesting(DEBUG1))
+ {
+ /* LOG all the sequences synchronized during current batch. */
+ for (int i = 0; i < curr_batch_seq_count; i++)
+ {
+ LogicalRepSequenceInfo *done_seq;
...
+
+ ereport(DEBUG1,
+ errmsg_internal("logical replication synchronization for
subscription \"%s\", sequence \"%s\" has finished",
+ get_subscription_name(subid, false),
+ done_seq->seqname));
+ }
+ }

3a) I think the DEBUG1 log can be moved inside the while loop just
above, to avoid traversing the list again unnecessarily.

Modified

LogicalRepSyncSequences():
-----------------------------
4)
+ /*
+ * Sequence synchronization failed due to a parameter mismatch. Set the
+ * failure time to prevent immediate initiation of the sequencesync
+ * worker.
+ */
+ if (sequence_sync_error)
+ {
+ logicalrep_seqsyncworker_set_failuretime();
+ ereport(LOG,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("sequence synchronization failed because the parameters
between the publisher and subscriber do not match for all
sequences"));
+ }

I think saying "sequence synchronization failed" could be misleading,
as the matched sequences will still be synced successfully. It might
be clearer to reword it to something like:
"sequence synchronization failed for some sequences because the
parameters between the publisher and subscriber do not match."

Modified

The comments are fixed in the v20250514 version patch attached at [1]/messages/by-id/CALDaNm3GXa-kKTe3oqmKA8oniHvZfgYUXG8mVczv4GJzFwG7bg@mail.gmail.com.

[1]: /messages/by-id/CALDaNm3GXa-kKTe3oqmKA8oniHvZfgYUXG8mVczv4GJzFwG7bg@mail.gmail.com

Regards,
Vignesh

#220Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#218)
Re: Logical Replication of sequences

Hi Vignesh.

Some minor review comments for the patches in set v20250514.

======

Patch 0001.

1.1
For function 'pg_sequence_state', the DOCS call the 2nd parameter
'sequence_name', but the pg_proc.dat file calls it 'seq_name'. Should
these be made the same?

////////////////////

Patch 0004.

pg_sequence_state:

4.1

- errmsg("sequence \"%s.%s\" does not exist",
+ errmsg("logical replication sequence \"%s.%s\" does not exist",

Why wasn't this change already done in an earlier patch when this
function was first implemented?

~~~

copy_sequences:

4.2

+/*
+ * Copy existing data of sequnces from the publisher.
+ *

Typo: "sequnces"

~~~

4.3
+{
+ int total_seq = list_length(remotesequences);
+ int curr_seq = 0;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs overhead of
+ * starting and committing transactions repeatedly. On the other hand, we want
+ * to avoid keeping this batch transaction open for extended periods so it is
+ * currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100

Wrong indent for block comment.

~~~

4.4
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not receive list of sequences information from the
publisher: %s",
+    res->err));

Should this say /sequences information/sequence information/

~~~

4.5
+ ereport(LOG,
+ errmsg("Logical replication sequence synchronization - total
unsynchronized: %d, attempted in this batch: %d; succeeded in this
batch: %d; mismatched in this batch: %d for subscription: \"%s\"",
+    total_seq, batch_seq_count, batch_success_count,
+    batch_mismatch_count, get_subscription_name(subid, false)));
+

This errmsg seems backwards. I think it should be expressed like the
other one immediately above. Also I think the information can be made
shorter -- e.g. no need to say "in this batch" multiple times.

SUGGESTION
"Logical replication sequence synchronization for subscription \"%s\"
- total unsynchronized: %d; batch #%d = %d attempted, %d succeeded, %d
mismatched"

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#221vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#220)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 15 May 2025 at 12:27, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

Some minor review comments for the patches in set v20250514.

======

Patch 0001.

1.1
For function 'pg_sequence_state', the DOCS call the 2nd parameter
'sequence_name', but the pg_proc.dat file calls it 'seq_name'. Should
these be made the same?

////////////////////

Patch 0004.

pg_sequence_state:

4.1

- errmsg("sequence \"%s.%s\" does not exist",
+ errmsg("logical replication sequence \"%s.%s\" does not exist",

Why wasn't this change already done in an earlier patch when this
function was first implemented?

~~~

copy_sequences:

4.2

+/*
+ * Copy existing data of sequnces from the publisher.
+ *

Typo: "sequnces"

~~~

4.3
+{
+ int total_seq = list_length(remotesequences);
+ int curr_seq = 0;
+
+/*
+ * We batch synchronize multiple sequences per transaction, because the
+ * alternative of synchronizing each sequence individually incurs overhead of
+ * starting and committing transactions repeatedly. On the other hand, we want
+ * to avoid keeping this batch transaction open for extended periods so it is
+ * currently limited to 100 sequences per batch.
+ */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100

Wrong indent for block comment.

~~~

4.4
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not receive list of sequences information from the
publisher: %s",
+    res->err));

Should this say /sequences information/sequence information/?

~~~

4.5
+ ereport(LOG,
+ errmsg("Logical replication sequence synchronization - total
unsynchronized: %d, attempted in this batch: %d; succeeded in this
batch: %d; mismatched in this batch: %d for subscription: \"%s\"",
+    total_seq, batch_seq_count, batch_success_count,
+    batch_mismatch_count, get_subscription_name(subid, false)));
+

This errmsg seems backwards. I think it should be expressed like the
other one immediately above. Also I think the information can be made
shorter -- e.g. no need to say "in this batch" multiple times.

SUGGESTION
"Logical replication sequence synchronization for subscription \"%s\"
- total unsynchronized: %d; batch #%d = %d attempted, %d succeeded, %d
mismatched"

Thanks for the comments; these are handled in the attached v20250516
version of the patches.

Regards,
Vignesh

Attachments:

v20250516-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From fa79619a5dae916373db96127db4a102af49896e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250516 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index b405525a465..7423f2413a3 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..01cd0e07fc2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..ff14256c0ab 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250516-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 59de9ba057ac19f629e666721caef6abf7c2d8f1 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250516 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and whether a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..fe6fb417f3d 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +121,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +152,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -279,10 +290,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +309,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +461,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 0b5652071d1..3d7b9bec86c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19638,6 +19671,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e2e7975b34e..3ba6cfe60f0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 55d892d9c16..0dda0b9a4be 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3214,6 +3214,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec65ab79fec..3dc84074e63 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9ea573fae21..d3e32001b20 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2343,6 +2343,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250516-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch; charset=US-ASCII)
From 57cef198446c8f381afd48190b0bd95acb78032c Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250516 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d3e32001b20..ac0eb8ef27a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2903,7 +2903,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250516-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From b34268332809a8fb205085f63d50ae80a470023e Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 13 May 2025 21:11:17 +0530
Subject: [PATCH v20250516 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of sequences in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
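
A minimal usage sketch of the workflow described above. The publication,
subscription, and connection names are hypothetical; the FOR ALL SEQUENCES
syntax is the one exercised by the regression tests earlier in this patch
set, and REFRESH PUBLICATION SEQUENCES is the new command added here:

    -- On the publisher: publish all sequences.
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- On the subscriber: creating the subscription adds the published
    -- sequences to pg_subscription_rel in INIT state and initiates the
    -- sequencesync worker to synchronize them.
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- Later, re-synchronize all sequence values from the publisher;
    -- this resets every sequence in pg_subscription_rel to INIT state.
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;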
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  10 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 642 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  45 +-
 src/backend/replication/logical/worker.c      |  58 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |   5 +
 src/include/catalog/pg_subscription_rel.h     |  11 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |   8 +
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 227 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 31 files changed, 1515 insertions(+), 179 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..998fc05d7c2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 01cd0e07fc2..8bbacd21ce9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			seqRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
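+
+	/*
+	 * For example, with publications "pub1" and "pub2" (placeholder names),
+	 * the assembled command is roughly:
+	 *
+	 *     SELECT DISTINCT s.schemaname, s.sequencename
+	 *     FROM pg_catalog.pg_publication_sequences s
+	 *     WHERE s.pubname IN ('pub1', 'pub2')
+	 */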
+
+	res = walrcv_exec(wrconn, cmd.data, 2, seqRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3d7b9bec86c..34c52747d2f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the current time as the sequencesync worker failure time in this
+ * subscription's apply worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..dd115e01801
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,642 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in a
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
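+ *
+ * For illustration, assuming a publication "seq_pub" that already includes the
+ * desired sequences and a subscription "seq_sub" on this node (both names are
+ * only examples), the sequences end up in INIT state for this worker via
+ * commands such as:
+ *
+ *     CREATE SUBSCRIPTION seq_sub CONNECTION '...' PUBLICATION seq_pub;
+ *     ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;
+ *     ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;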
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
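+/*
+ * List of SubscriptionRelState entries for subscription sequences that have
+ * not yet reached READY state; refreshed by SyncFetchRelationStates().
+ */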
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still sequences to be synced after the sequencesync worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Get the sequence object from the list of sequences.
+ */
+static LogicalRepSequenceInfo *
+get_sequence_obj(List *sequences, char *nspname, char *seqname)
+{
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, sequences)
+	{
+		if (!strcmp(seqinfo->nspname, nspname) &&
+			!strcmp(seqinfo->seqname, seqname))
+			return seqinfo;
+	}
+
+	return NULL;
+}
+
+/*
+ * report_mismatched_sequences
+ *
+ * Report any sequence mismatches as a single warning log.
+ */
+static void
+report_mismatched_sequences(StringInfo mismatched_seqs)
+{
+	if (mismatched_seqs->len)
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("parameters differ for the remote and local sequences (%s) for subscription \"%s\"",
+					   mismatched_seqs->data, MySubscription->name),
+				errhint("Alter/Re-create local sequences to have the same parameters as the remote sequences."));
+
+		resetStringInfo(mismatched_seqs);
+	}
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs,
+							LogicalRepSequenceInfo *seqinfo)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ *
+ * The output parameter 'mismatched_seqs' collects the names of the mismatched
+ * sequences. The output parameter 'sequence_sync_error' indicates whether any
+ * local/remote sequence parameter mismatch was detected.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid,
+			   StringInfo mismatched_seqs, bool *sequence_sync_error)
+{
+	int			total_seq = list_length(remotesequences);
+	int			curr_seq = 0;
+	int			curr_batch = 1;
+
+	/*
+	 * We batch-synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs the
+	 * overhead of starting and committing a transaction repeatedly. On the
+	 * other hand, we want to avoid keeping this batch transaction open for
+	 * extended periods, so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (curr_seq < total_seq)
+	{
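+		/*
+		 * Columns fetched per sequence from the publisher: schema name,
+		 * sequence name, the pg_sequence_state() columns (expected here as
+		 * page LSN, last_value, log_cnt, is_called), and the pg_sequence
+		 * parameters (seqtypid, seqstart, seqincrement, seqmin, seqmax,
+		 * seqcycle).
+		 */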
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfoData cmd;
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+		int			col = 0;
+		int			batch_seq_count = MAX_SEQUENCES_SYNC_PER_BATCH;
+		int			batch_success_count = 0;
+		int			batch_mismatch_count = 0;
+
+		StringInfo	seqstr = makeStringInfo();
+
+		StartTransactionCommand();
+
+		if ((total_seq - curr_seq) < MAX_SEQUENCES_SYNC_PER_BATCH)
+			batch_seq_count = total_seq - curr_seq;
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_seq_count; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo;
+
+			seqinfo = (LogicalRepSequenceInfo *) lfirst(list_nth_cell(remotesequences,
+																	  curr_seq + i));
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(&cmd);
+		appendStringInfo(&cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 seqstr->data);
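+
+		/*
+		 * For a batch containing, say, sequences public.s1 and public.s2
+		 * (placeholder names), the command sent to the publisher is roughly:
+		 *
+		 *     SELECT s.schname, s.seqname, ps.*, seq.seqtypid, seq.seqstart,
+		 *            seq.seqincrement, seq.seqmin, seq.seqmax, seq.seqcycle
+		 *     FROM ( VALUES ('public', 's1'), ('public', 's2') )
+		 *          AS s (schname, seqname)
+		 *     JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true
+		 *     JOIN pg_namespace n ON n.nspname = s.schname
+		 *     JOIN pg_class c ON c.relnamespace = n.oid
+		 *                    AND c.relname = s.seqname
+		 *     JOIN pg_sequence seq ON seq.seqrelid = c.oid
+		 */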
+
+		res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			bool		seq_params_match;
+			Form_pg_sequence seqform;
+			bool		isnull;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+			col = 0;
+
+			seqinfo = get_sequence_obj(remotesequences, nspname, seqname);
+			Assert(seqinfo);
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			seq_params_match = seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement;
+
+			ReleaseSysCache(tup);
+
+			/* Update the sequence only if the parameters are identical. */
+			if (seq_params_match)
+			{
+				SetSequence(seqinfo->localrelid, last_value, is_called, log_cnt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											seqinfo->seqname));
+
+				batch_success_count++;
+			}
+			else
+			{
+				append_mismatched_sequences(mismatched_seqs, seqinfo);
+				batch_mismatch_count++;
+			}
+
+			curr_seq++;
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d; batch #%d = %d attempted, %d succeeded, %d mismatched",
+					   get_subscription_name(subid, false), total_seq,
+					   curr_batch, batch_seq_count, batch_success_count,
+					   batch_mismatch_count));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		pfree(cmd.data);
+
+		/* Commit this batch, and prepare for next batch. */
+		CommitTransactionCommand();
+
+		curr_batch++;
+	}
+
+	if (mismatched_seqs->len)
+	{
+		*sequence_sync_error = true;
+		report_mismatched_sequences(mismatched_seqs);
+	}
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	bool		sequence_sync_error = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the sequence synchronization runs as the sequence
+		 * owner, unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/*
+	 * If the sequence copy fails, emit a warning listing the mismatched
+	 * sequences before exiting.
+	 */
+	PG_TRY();
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid,
+					   mismatched_seqs, &sequence_sync_error);
+	}
+	PG_CATCH();
+	{
+		report_mismatched_sequences(mismatched_seqs);
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+
+	/*
+	 * Sequence synchronization failed due to a parameter mismatch. Set the
+	 * failure time to prevent immediate initiation of the sequencesync
+	 * worker.
+	 */
+	if (sequence_sync_error)
+	{
+		logicalrep_seqsyncworker_set_failuretime();
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence synchronization worker failed: one or more sequences have mismatched parameters between the publisher and subscriber"));
+	}
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource exhaustion and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid, false);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates(void)
 {
+	/*
+	 * has_subtables is declared static so that its value is remembered across
+	 * calls; it only needs to be recomputed after the cached relation states
+	 * are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..688e5c85c47 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1561,7 +1561,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1583,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..1742968427a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4648,8 +4674,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4754,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4777,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4832,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
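+	/*
+	 * For a sequencesync worker, register a callback that records the failure
+	 * time in the leader apply worker if this worker exits abnormally; a clean
+	 * exit cancels it in SyncFinishWorker().
+	 */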
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 3ba6cfe60f0..d409ea5f638 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 3dc84074e63..1206c515a0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ff14256c0ab..3b549231a90 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..ad71a3ca84f 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,13 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
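+/*
+ * Per-sequence information used by the sequencesync worker: the schema and
+ * sequence names (which are expected to match on the publisher and the
+ * subscriber) and the OID of the local sequence.
+ */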
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +97,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..2c4d1b78649 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..cf5904f3e06
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,227 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw a warning
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? parameters differ for the remote and local sequences \("public.regress_s5"\) for subscription "regress_seq_sub"/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ac0eb8ef27a..b27db84589f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1623,6 +1623,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250516-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From d6edadcfb4ca22389a5400dea72f2ebb25aaf8be Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 10:30:51 +0530
Subject: [PATCH v20250516 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 380 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 23d2b1be424..334c47e0034 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5311,8 +5311,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5335,10 +5335,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..bff3ea8bdc5 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,204 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged listing all differing
+     sequences before the process exits. The apply worker detects the failure
+     and repeatedly respawns the sequence synchronization worker to continue
+     the synchronization process until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2317,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2647,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2661,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..cac2c8bf7e3 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#222Nisha Moond
nisha.moond412@gmail.com
In reply to: vignesh C (#221)
Re: Logical Replication of sequences

Thanks for the comments, these are handled in the attached v20250516
version patch.

Thanks for the patches. Here are my review comments -

Patch-0004: src/backend/replication/logical/sequencesync.c

The sequence count logic using curr_seq in copy_sequences() seems buggy.
Currently, curr_seq is incremented based on the number of tuples
received from the publisher inside the inner while loop.
This means it's counting the number of sequences returned by the
publisher, not the number of sequences processed locally. This can
lead to two issues:

1) Repeated syncing of sequences:
If some sequences are missing on the publisher, curr_seq will reflect
fewer items than expected, and subsequent batches may reprocess
already-synced sequences. Because the next batch will use curr_seq to get
values from the list as -

seqinfo = (LogicalRepSequenceInfo *)
lfirst(list_nth_cell(remotesequences, curr_seq + i));

Example:
For 110 sequences (s1 to s110), if 5 (s1 to s5) are missing on the
publisher in the first batch, curr_seq = 95. In the next cycle, we
resync s95 to s99.
~~~~

2) Risk of sequencesync worker getting stuck in infinite loop

Consider a case where remotesequences has 10 sequences (s1–s10) that need
syncing, and concurrently s9, s10 are deleted on the publisher.

Cycle 1:
Publisher returns s1–s8. So curr_seq = 8.

Cycle 2:
Publisher query returns zero rows (as s9, s10 no longer exist).
curr_seq stays at 8 and never advances.

This causes the while (curr_seq < total_seq) loop to run forever.
~~~~

I think curr_seq should be incremented by batch_seq_count just
outside the inner while loop.
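
A minimal standalone sketch of the suggested loop structure (this is
not the patch's copy_sequences() code; the batch-size constant and the
variable names are only illustrative assumptions). The point is that
the cursor advances by the number of sequences attempted in the batch,
so sequences missing on the publisher can neither cause reprocessing
nor an infinite loop:

/* batch_advance.c - toy model of the batched sequence-sync loop */
#include <stdio.h>

#define BATCH_SIZE 100			/* assumed batch size, per the logs */

int
main(void)
{
	int		total_seq = 110;	/* sequences recorded for the subscription */
	int		curr_seq = 0;
	int		batch = 0;

	while (curr_seq < total_seq)
	{
		int		batch_seq_count;
		int		returned;

		/* request up to one batch worth of sequences from the publisher */
		batch_seq_count = total_seq - curr_seq;
		if (batch_seq_count > BATCH_SIZE)
			batch_seq_count = BATCH_SIZE;

		/* pretend 5 sequences of the first batch were dropped on the publisher */
		returned = (batch == 0) ? batch_seq_count - 5 : batch_seq_count;
		batch++;

		/*
		 * Buggy variant: curr_seq += returned; a short result set makes the
		 * next batch reprocess entries, and a zero-row result loops forever.
		 * Fixed variant: advance by the number attempted in this batch.
		 */
		curr_seq += batch_seq_count;

		printf("batch #%d: %d attempted, %d received, %d processed so far\n",
			   batch, batch_seq_count, returned, curr_seq);
	}

	return 0;
}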

--
Thanks,
Nisha

#223shveta malik
shveta.malik@gmail.com
In reply to: Nisha Moond (#222)
Re: Logical Replication of sequences

On Tue, May 20, 2025 at 8:35 AM Nisha Moond <nisha.moond412@gmail.com> wrote:

Thanks for the comments, these are handled in the attached v20250516
version patch.

Thanks for the patches. Here are my review comments -

Patch-0004: src/backend/replication/logical/sequencesync.c

The sequence count logic using curr_seq in copy_sequences() seems buggy.
Currently, curr_seq is incremented based on the number of tuples
received from the publisher inside the inner while loop.
This means it's counting the number of sequences returned by the
publisher, not the number of sequences processed locally. This can
lead to two issues:

1) Repeated syncing of sequences:
If some sequences are missing on the publisher, curr_seq will reflect
fewer items than expected, and subsequent batches may reprocess
already-synced sequences. Because the next batch will use curr_seq to get
values from the list as -

seqinfo = (LogicalRepSequenceInfo *)
lfirst(list_nth_cell(remotesequences, curr_seq + i));

Example:
For 110 sequences (s1 to s110), if 5 (s1 to s5) are missing on the
publisher in the first batch, curr_seq = 95. In the next cycle, we
resync s95 to s99.
~~~~

2) Risk of sequencesync worker getting stuck in infinite loop

Consider a case where remotesequences has 10 sequences (s1–s10) that need
syncing, and concurrently s9, s10 are deleted on the publisher.

Cycle 1:
Publisher returns s1–s8. So curr_seq = 8.

Cycle 2:
Publisher query returns zero rows (as s9, s10 no longer exist).
curr_seq stays at 8 and never advances.

This causes the while (curr_seq < total_seq) loop to run forever.
~~~~

I think curr_seq should be incremented by batch_seq_count just
outside the inner while loop.

I faced a similar issue while testing. I think it is due to the
code-logic issue pointed out by Nisha above.

Test-scenario:
--Created 250 sequences on both pub and sub.
--There were 10 sequences mismatched.
--Sequence replication worked as expected. Logs look better now:

LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #1 = 100 attempted, 97
succeeded, 3 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #2 = 100 attempted, 95
succeeded, 5 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #3 = 50 attempted, 48
succeeded, 2 mismatched

--Then I corrected a few and deleted 1 on pub, and the sequence sync
worker went into an infinite loop after that.

LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 10; batch #1004 = 1 attempted, 0
succeeded, 0 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 10; batch #1005 = 1 attempted, 0
succeeded, 0 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 10; batch #1006 = 1 attempted, 0
succeeded, 0 mismatched

thanks
Shveta

#224Peter Smith
smithpb2250@gmail.com
In reply to: shveta malik (#223)
Re: Logical Replication of sequences

Test-scenario:
--Created 250 sequences on both pub and sub.
--There were 10 sequences mismatched.
--Sequence replication worked as expected. Logs look better now:

LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #1 = 100 attempted, 97
succeeded, 3 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #2 = 100 attempted, 95
succeeded, 5 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #3 = 50 attempted, 48
succeeded, 2 mismatched

When there are many batches required, it seems a bit strange to repeat
the same "total unsynchronized" over and over.

Would it be better to show the total number once, and thereafter show
the number of sequences remaining to be processed as they tick down?

e.g.
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized = 250
LOG: Logical replication sequence synchronization for subscription
"sub1" - batch #1 = 100 attempted, 97 succeeded, 3 mismatched, 150
remaining
LOG: Logical replication sequence synchronization for subscription
"sub1" - batch #2 = 100 attempted, 95 succeeded, 5 mismatched, 50
remaining
LOG: Logical replication sequence synchronization for subscription
"sub1" - batch #3 = 50 attempted, 48 succeeded, 2 mismatched, 0
remaining

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#225Nisha Moond
nisha.moond412@gmail.com
In reply to: Nisha Moond (#222)
Re: Logical Replication of sequences

On Tue, May 20, 2025 at 8:35 AM Nisha Moond <nisha.moond412@gmail.com> wrote:

Thanks for the comments, these are handled in the attached v20250516
version patch.

Thanks for the patches. Here are my review comments -

Patch-0004: src/backend/replication/logical/sequencesync.c

Hi,

Currently, the behavior of the internal query used to fetch sequence
info from the pub is inconsistent and potentially misleading.

case1: If a single non-existent sequence is passed (e.g., VALUES
('public','n10')), the query throws an ERROR, so we get error on sub -
ERROR: could not receive list of sequence information from the
publisher: ERROR: sequence "public.n10" does not exist

case2: If multiple non-existent sequences are passed (e.g., VALUES
('public','n8'),('public','n9')), it silently returns zero rows,
resulting only in a LOG message instead of an error.
LOG: Logical replication sequence synchronization for subscription
"subs" - total unsynchronized: 2; batch #1 = 2 attempted, 0 succeeded,
0 mismatched

IMO, this inconsistency can be confusing for users. I think we should
make the behavior uniform. Either -
(a) Raise an error if any/all of the requested sequences are missing
on the publisher, or
(b) Instead of raising an error, emit a LOG (as is done in case2) and
maybe include the count of missing sequences too.

I'm fine with either option.
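
If we go with (b), a tiny standalone sketch of the kind of message that
could be emitted (again, not the actual subscriber code; the function
name and the wording of the message are only illustrative assumptions):

/* missing_count.c - toy model of logging missing sequences instead of erroring */
#include <stdio.h>

static void
report_missing(const char *subname, int requested, int received)
{
	/* report how many of the requested sequences were not found */
	if (received < requested)
		printf("LOG:  %d of %d requested sequences do not exist on the publisher "
			   "for subscription \"%s\"\n",
			   requested - received, requested, subname);
}

int
main(void)
{
	report_missing("subs", 2, 0);	/* case2 above: both sequences missing */
	return 0;
}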

--
Thanks,
Nisha

#226shveta malik
shveta.malik@gmail.com
In reply to: Peter Smith (#224)
Re: Logical Replication of sequences

On Tue, May 20, 2025 at 11:13 AM Peter Smith <smithpb2250@gmail.com> wrote:

Test-scenario:
--Created 250 sequences on both pub and sub.
--There were 10 sequences mismatched.
--Sequence replication worked as expected. Logs look better now:

LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #1 = 100 attempted, 97
succeeded, 3 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #2 = 100 attempted, 95
succeeded, 5 mismatched
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 250; batch #3 = 50 attempted, 48
succeeded, 2 mismatched

When there are many batches required, it seems a bit strange to repeat
the same "total unsynchronized" over and over.

Would it be better to show the total number once, and thereafter show
the number of sequences remaining to be processed as they tick down?

e.g.
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized = 250
LOG: Logical replication sequence synchronization for subscription
"sub1" - batch #1 = 100 attempted, 97 succeeded, 3 mismatched, 150
remaining
LOG: Logical replication sequence synchronization for subscription
"sub1" - batch #2 = 100 attempted, 95 succeeded, 5 mismatched, 50
remaining
LOG: Logical replication sequence synchronization for subscription
"sub1" - batch #3 = 50 attempted, 48 succeeded, 2 mismatched, 0
remaining

+1 on log change suggestions.

Please find few more comments:

1)
Temporary sequences will not be replicated; shall we mention this in the
docs under '29.7. Replicating Sequences'?

2)
CREATE publication pub1 for all sequences WITH (publish = 'insert,
update, truncate');

I think it does not make sense to give 'publish' as above (or
publish_via_partition_root) for an 'all sequences' publication. Shall we
display a WARNING that such options will be ignored for 'all sequences'
and let the CREATE PUBLICATION go ahead? Thoughts? Also, the doc for the
publish* options in the CREATE PUBLICATION page needs to specify that
these options are not applicable for an ALL SEQUENCES publication.

3)
It will be good to move create_publication.sgml as well to the last
patch where all other doc changes are present. I was trying to find
this change in the last patch but ultimately found it in patch002.

4)
Currently the log is:

------
LOG: logical replication sequence synchronization worker for
subscription "sub1" has started
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 1; batch #1 = 1 attempted, 0 succeeded,
1 mismatched
WARNING: parameters differ for the remote and local sequences
("public.myseq34") for subscription "sub1"
HINT: Alter/Re-create local sequences to have the same parameters as
the remote sequences.
WARNING: sequence synchronization worker failed: one or more
sequences have mismatched parameters between the publisher and
subscriber
LOG: logical replication sequence synchronization worker for
subscription "sub1" has finished
-----

Do we need both?
--WARNING: sequence synchronization worker failed.
--LOG: logical replication sequence synchronization worker for
subscription "sub1" has finished

This WARNING repeats previously stated information. I feel we can get
rid of it, unless there is a chance of some new error, other than the
mismatched-sequence error, that we are trying to display in this WARNING?

thanks
Shveta

#227vignesh C
vignesh21@gmail.com
In reply to: Nisha Moond (#222)
5 attachment(s)
Re: Logical Replication of sequences

On Tue, 20 May 2025 at 08:35, Nisha Moond <nisha.moond412@gmail.com> wrote:

Thanks for the comments, these are handled in the attached v20250516
version patch.

Thanks for the patches. Here are my review comments -

Patch-0004: src/backend/replication/logical/sequencesync.c

The sequence count logic using curr_seq in copy_sequences() seems buggy.
Currently, curr_seq is incremented based on the number of tuples
received from the publisher inside the inner while loop.
This means it's counting the number of sequences returned by the
publisher, not the number of sequences processed locally. This can
lead to two issues:

1) Repeated syncing of sequences:
If some sequences are missing on the publisher, curr_seq will reflect
fewer items than expected, and subsequent batches may reprocess
already-synced sequences. Because the next batch will use curr_seq to get
values from the list as -

seqinfo = (LogicalRepSequenceInfo *)
lfirst(list_nth_cell(remotesequences, curr_seq + i));

Example:
For 110 sequences (s1 to s110), if 5 (s1 to s5) are missing on the
publisher in the first batch, curr_seq = 95. In the next cycle, we
resync s95 to s99.
~~~~

2) Risk of sequencesync worker getting stuck in infinite loop

Consider a case where remotesequences has 10 sequences (s1–s10) that need
syncing, and concurrently s9, s10 are deleted on the publisher.

Cycle 1:
Publisher returns s1–s8. So curr_seq = 8.

Cycle 2:
Publisher query returns zero rows (as s9, s10 no longer exist).
curr_seq stays at 8 and never advances.

This causes the while (curr_seq < total_seq) loop to run forever.

These are handled in the attached v20250521 version patch.
Also the issue reported at [1]/messages/by-id/CAHut+PstucunJLQn8C=bewmYdoSQStBcEJgG2bkZJUZnTowhFQ@mail.gmail.com is handled in the attached patch.

[1]: /messages/by-id/CAHut+PstucunJLQn8C=bewmYdoSQStBcEJgG2bkZJUZnTowhFQ@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250521-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 7b97d642312dbd3875c202c3c412619f17f841f2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250521 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index b405525a465..7423f2413a3 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..01cd0e07fc2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence as stored on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..ff14256c0ab 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250521-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 1486671afbb957f16b211b17ca5a582802f82fc4 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Wed, 21 May 2025 13:51:53 +0530
Subject: [PATCH v20250521 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 7 files changed, 389 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 23d2b1be424..334c47e0034 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5311,8 +5311,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5335,10 +5335,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..bff3ea8bdc5 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,204 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged listing all differing
+     sequences before the process exits. The apply worker detects the failure
+     and repeatedly respawns the sequence synchronization worker to continue
+     the synchronization process until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2317,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2647,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2661,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

v20250521-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 92d811331941d304f3d1505470f362d1b38e5607 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250521 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d3e32001b20..ac0eb8ef27a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2903,7 +2903,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250521-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From 7f07d61f7d9588a6ee342295b505b45bf8ed5423 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Mon, 3 Feb 2025 09:53:31 +0530
Subject: [PATCH v20250521 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql is enhanced: \d on a sequence now displays the
publications that include it, and \dRp now shows whether a publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
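
For illustration only (not part of the patch itself; the publication names
below are invented), a minimal sketch of the syntax this adds:

-- Publish every sequence in the database, including ones created later
-- (like FOR ALL TABLES, this requires superuser):
CREATE PUBLICATION pub_all_sequences FOR ALL SEQUENCES;

-- Publish all tables and all sequences from a single publication:
CREATE PUBLICATION pub_everything FOR ALL TABLES, ALL SEQUENCES;

-- \dRp then shows an "All sequences" column, and \d on a sequence lists
-- the publications that include it.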
---
 doc/src/sgml/ref/create_publication.sgml  |  63 ++-
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 15 files changed, 760 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..fe6fb417f3d 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +121,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +152,26 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -279,10 +290,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +309,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +461,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 0b5652071d1..3d7b9bec86c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19638,6 +19671,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.  Also, raise an
+ * error if any publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c73e73a87d1..414fbe14553 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index cf34f71ea11..5fb1786ec03 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3214,6 +3214,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec65ab79fec..3dc84074e63 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9ea573fae21..d3e32001b20 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2343,6 +2343,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250521-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch; charset=US-ASCII)
From 0453fd2936936fb1d93f2eb65bc61ddd97794f08 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 13 May 2025 21:11:17 +0530
Subject: [PATCH v20250521 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized; commits the synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places (see the usage sketch after this list):
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
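
A minimal usage sketch of the workflow described above (the publication and
subscription names and the connection string are illustrative; FOR ALL
SEQUENCES comes from the earlier patch in this series, and REFRESH
PUBLICATION SEQUENCES is the new command added here):

    -- Publisher: publish all sequences.
    CREATE PUBLICATION pub_all_sequences FOR ALL SEQUENCES;

    -- Subscriber: the published sequences are added to pg_subscription_rel
    -- in INIT state and a sequencesync worker synchronizes them.
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub_all_sequences;

    -- Subscriber, later: re-synchronize all sequences; existing entries are
    -- reset to INIT and the sequencesync worker is started again.
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;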
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  11 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 653 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |  11 +-
 src/include/catalog/pg_subscription_rel.h     |  12 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |  11 +-
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 226 ++++++
 src/tools/pgindent/typedefs.list              |   1 +
 34 files changed, 1587 insertions(+), 204 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables: when all_states is true, get all tables; otherwise get
+ * only tables that have not reached READY state.
+ * If getting sequences: when all_states is true, get all sequences;
+ * otherwise get only sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..9b419750489 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
@@ -1391,6 +1401,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 01cd0e07fc2..8bbacd21ce9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or its publication last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or its publication last refreshed.
+ *
+ * Note that this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3d7b9bec86c..34c52747d2f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker failure time in the apply worker.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime()
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..c4e0a90bd27
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,653 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * Get the sequence object from the list of sequences.
+ */
+static LogicalRepSequenceInfo *
+get_sequence_obj(List *sequences, char *nspname, char *seqname)
+{
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, sequences)
+	{
+		if (!strcmp(seqinfo->nspname, nspname) &&
+			!strcmp(seqinfo->seqname, seqname))
+			return seqinfo;
+	}
+
+	return NULL;
+}
+
+/*
+ * report_error_sequences
+ *
+ * Logs a warning listing all sequences that are missing on the publisher,
+ * as well as those with value mismatches relative to the subscriber.
+ */
+static void
+report_error_sequences(List *sequences, StringInfo mismatched_seqs)
+{
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	combined_error_msg = makeStringInfo();
+	bool		has_missing_seqs = false;
+	bool		has_mismatched_seqs = (mismatched_seqs->len > 0);
+
+	/* Identify missing sequences */
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, sequences)
+	{
+		if (!seqinfo->remote_seq_fetched)
+		{
+			if (missing_seqs->len > 0)
+				appendStringInfoString(missing_seqs, ", ");
+
+			appendStringInfo(missing_seqs, "\"%s\".\"%s\"",
+							 seqinfo->nspname, seqinfo->seqname);
+			has_missing_seqs = true;
+		}
+	}
+
+	if (has_missing_seqs || has_mismatched_seqs)
+	{
+		appendStringInfo(combined_error_msg, "logical replication sequence synchronization failed for subscription \"%s\":",
+						 MySubscription->name);
+
+		if (has_missing_seqs)
+			appendStringInfo(combined_error_msg, " sequences (%s) are missing on the publisher.",
+							 missing_seqs->data);
+
+		if (has_mismatched_seqs)
+		{
+			/* Add a separator if both types of errors exist */
+			if (has_missing_seqs)
+				appendStringInfoString(combined_error_msg, " Additionally,");
+
+			appendStringInfo(combined_error_msg, " parameters differ for the remote and local sequences (%s)",
+							 mismatched_seqs->data);
+		}
+
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("%s", combined_error_msg->data));
+	}
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(combined_error_msg);
+}
+
+/*
+ * append_mismatched_sequences
+ *
+ * Appends schema name and sequence name of sequences that have discrepancies
+ * between the publisher and subscriber to the mismatched_seqs string.
+ */
+static void
+append_mismatched_sequences(StringInfo mismatched_seqs,
+							LogicalRepSequenceInfo *seqinfo)
+{
+	if (mismatched_seqs->len)
+		appendStringInfoString(mismatched_seqs, ", ");
+
+	appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local
+ * relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
+{
+	int			total_seqs = list_length(remotesequences);
+	int			current_index = 0;
+	int			current_batch = 1;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	bool		sequence_sync_error = false;
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods, so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfoData cmd;
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+		int			col = 0;
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+
+		StringInfo	seqstr = makeStringInfo();
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(&cmd);
+		appendStringInfo(&cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd.data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			bool		seq_params_match;
+			Form_pg_sequence seqform;
+			bool		isnull;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+			col = 0;
+
+			seqinfo = get_sequence_obj(remotesequences, nspname, seqname);
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			seq_params_match = seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement;
+
+			ReleaseSysCache(tup);
+
+			/* Update the sequence only if the parameters are identical. */
+			if (seq_params_match)
+			{
+				SetSequence(seqinfo->localrelid, last_value, is_called, log_cnt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											get_subscription_name(subid, false),
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				append_mismatched_sequences(mismatched_seqs, seqinfo);
+				batch_mismatched_count++;
+			}
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   get_subscription_name(subid, false), current_batch,
+					   batch_size, batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		pfree(cmd.data);
+
+		/* Commit this batch, and prepare for next batch. */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization was incomplete for this batch due to
+		 * missing sequences on the publisher or mismatched parameters.
+		 */
+		if (batch_succeeded_count + batch_mismatched_count < batch_size ||
+			batch_mismatched_count)
+			sequence_sync_error = true;
+
+		current_index += batch_size;
+		current_batch++;
+	}
+
+	if (sequence_sync_error)
+		report_error_sequences(remotesequences, mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		seq_info->remote_seq_fetched = false;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid);
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates()
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..6ed37bb57b9 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1516,7 +1516,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1561,7 +1562,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1570,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1584,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..ed6c2da04e9 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4536,7 +4562,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4648,8 +4675,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4755,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4778,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4833,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4825,6 +4862,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4837,9 +4878,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 97af7c6554f..7bf3b8d40e5 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 414fbe14553..eaab91ef456 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 3dc84074e63..1206c515a0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ff14256c0ab..20ac0802b41 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5695,9 +5695,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..ab10442e872 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +98,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 378f2f2c2ba..03d4df572f4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..979756894b7
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,226 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub": parameters differ for the remote and local sequences \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ac0eb8ef27a..b27db84589f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1623,6 +1623,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

#228vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#223)
Re: Logical Replication of sequences

On Tue, 20 May 2025 at 09:54, shveta malik <shveta.malik@gmail.com> wrote:

On Tue, May 20, 2025 at 8:35 AM Nisha Moond <nisha.moond412@gmail.com> wrote:

Thanks for the comments, these are handled in the attached v20250516
version patch.

Thanks for the patches. Here are my review comments -

Patch-0004: src/backend/replication/logical/sequencesync.c

The sequence count logic using curr_seq in copy_sequences() seems buggy.
Currently, curr_seq is incremented based on the number of tuples
received from the publisher inside the inner while loop.
This means it's counting the number of sequences returned by the
publisher, not the number of sequences processed locally. This can
lead to two issues:

1) Repeated syncing of sequences:
If some sequences are missing on the publisher, curr_seq will reflect
fewer items than expected, and subsequent batches may reprocess
already-synced sequences, because the next batch uses curr_seq to pick
entries from the list:

seqinfo = (LogicalRepSequenceInfo *)
lfirst(list_nth_cell(remotesequences, curr_seq + i));

Example:
For 110 sequences (s1 to s110), if 5 (s1 to s5) are missing on the
publisher in the first batch, curr_seq = 95. In the next cycle, we
resync s95 to s99.
~~~~

2) Risk of sequencesync worker getting stuck in infinite loop

Consider a case where remotesequences has 10 sequences (s1–s10) that need
syncing, and concurrently s9 and s10 are deleted on the publisher.

Cycle 1:
Publisher returns s1–s8. So curr_seq = 8.

Cycle 2:
Publisher query returns zero rows (as s9, s10 no longer exist).
curr_seq stays at 8 and never advances.

This causes the while (curr_seq < total_seq) loop to run forever.
~~~~

I think curr_seq should be incremented by batch_seq_count just
outside the inner while loop.

I faced a similar issue while testing. I think it is due to the
code-logic issue pointed out by Nisha above.

Yes, it is the same issue; it has been fixed in the v20250521
version posted at [1]/messages/by-id/CALDaNm2ZgyYbowqZJfpkpRV_tev5o-rqpkLDkp496ku15Tdsqw@mail.gmail.com.
[1]: /messages/by-id/CALDaNm2ZgyYbowqZJfpkpRV_tev5o-rqpkLDkp496ku15Tdsqw@mail.gmail.com
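
To make the fix concrete, here is a minimal sketch of the corrected
batching loop, not the actual patch code: the names current_index,
batch_size, and current_batch follow the v20250522 patch shown earlier in
the thread, while total_seq and MAX_SEQUENCES_SYNC_PER_BATCH are
illustrative placeholders, and the remote-query and per-row processing
steps are elided.

	/*
	 * Sketch only: advance the outer loop by the number of sequences
	 * attempted in this batch, not by the number of rows the publisher
	 * returned, so missing remote sequences can neither be re-synced nor
	 * cause the loop to spin forever.
	 */
	while (current_index < total_seq)
	{
		int		batch_size = Min(total_seq - current_index,
								 MAX_SEQUENCES_SYNC_PER_BATCH);

		/* ... fetch this batch of sequences from the publisher ... */

		/* ... process whatever rows were actually returned ... */

		/* Commit this batch, and prepare for next batch. */
		CommitTransactionCommand();

		/* Always advance past every sequence attempted in this batch. */
		current_index += batch_size;
		current_batch++;
	}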

Regards,
Vignesh

#229vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#226)
5 attachment(s)
Re: Logical Replication of sequences

On Wed, 21 May 2025 at 16:11, shveta malik <shveta.malik@gmail.com> wrote:

Please find a few more comments:

1)
Temporary sequences will not be replicated; shall we mention this in
the docs under '29.7. Replicating Sequences'?

I added it to the CREATE PUBLICATION "ALL SEQUENCES" section, as a similar
restriction about tables is mentioned there.

2)
CREATE publication pub1 for all sequences WITH (publish = 'insert,
update, truncate');

I think it does not make sense to give 'publish' as above (or
publish_via_partition_root) for an 'all sequences' publication. Shall we
display a WARNING that such options will be ignored for 'all sequences' and
let the CREATE PUBLICATION go ahead? Thoughts? Also, the doc for the
publish* options on the CREATE PUBLICATION page needs to specify that
these options are not applicable to ALL SEQUENCES publications.

I felt there was no need to add a warning; documenting it would be enough.

3)
It would be good to move create_publication.sgml as well to the last
patch, where all the other doc changes are present. I was trying to find
this change in the last patch but ultimately found it in patch 0002.

Moved

4)
Currently the log is:

------
LOG: logical replication sequence synchronization worker for
subscription "sub1" has started
LOG: Logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 1; batch #1 = 1 attempted, 0 succeeded,
1 mismatched
WARNING: parameters differ for the remote and local sequences
("public.myseq34") for subscription "sub1"
HINT: Alter/Re-create local sequences to have the same parameters as
the remote sequences.
WARNING: sequence synchronization worker failed: one or more
sequences have mismatched parameters between the publisher and
subscriber
LOG: logical replication sequence synchronization worker for
subscription "sub1" has finished
-----

Do we need both?

Removed it.

The attached v20250522 patch contains these changes.

Regards,
Vignesh

Attachments:

v20250522-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 7b97d642312dbd3875c202c3c412619f17f841f2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250522 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index b405525a465..7423f2413a3 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..01cd0e07fc2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	relation_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current value of the sequence, from its on-disk tuple */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62beb71da28..ff14256c0ab 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
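
For anyone trying 0001 in isolation, a quick sanity check on a patched
server could look like the sketch below (the page LSN is illustrative and
will differ; the other values are what I would expect after a single
nextval on a fresh sequence):

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');
SELECT * FROM pg_sequence_state('public', 'demo_seq');
  page_lsn  | last_value | log_cnt | is_called
------------+------------+---------+-----------
 0/01D4A2E8 |          1 |      32 | t
(1 row)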

v20250522-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchtext/x-patch; charset=US-ASCII; name=v20250522-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchDownload
From 3f4f39131c976634201813a0712c564621eac41e Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250522 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d3e32001b20..ac0eb8ef27a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2903,7 +2903,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0
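
Not part of the patches themselves, but since sequences are now tracked in
pg_subscription_rel (see the catalog changes in 0005 below), reviewers can
check the per-subscription sequence sync states with a query along these
lines:

SELECT s.subname, c.relname, sr.srsubstate
  FROM pg_subscription_rel sr
  JOIN pg_subscription s ON s.oid = sr.srsubid
  JOIN pg_class c ON c.oid = sr.srrelid
 WHERE c.relkind = 'S';   -- 'i' = initialize, 'r' = ready, per the 0005 docs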

v20250522-0005-Documentation-for-sequence-synchronization.patchtext/x-patch; charset=US-ASCII; name=v20250522-0005-Documentation-for-sequence-synchronization.patchDownload
From c300be87eb6703581fd0f833147660e1d0e6a07e Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250522 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  79 +++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 8 files changed, 450 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 23d2b1be424..334c47e0034 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5168,9 +5168,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5311,8 +5311,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5335,10 +5335,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f288c049a5c..bff3ea8bdc5 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1786,6 +1790,204 @@ test_sub=# SELECT * from tab_gen_to_gen;
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING listing all of the differing
+     sequences is logged before the synchronization worker exits. The apply
+     worker detects the failure and repeatedly respawns the sequence
+     synchronization worker until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values can become out of sync whenever the
+    sequences are updated on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side again.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2115,16 +2317,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2442,8 +2647,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2456,8 +2661,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing table data and synchronize
+          sequences in the publications that are being subscribed to when the
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..bfedf51bfdb 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +121,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +152,31 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -188,6 +204,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           for logical replication does not take this parameter into account when
           copying existing table data.
          </para>
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -224,6 +243,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           See <xref linkend="logical-replication-gencols"/> for more details about
           logical replication of generated columns.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -259,6 +282,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           If this is enabled, <literal>TRUNCATE</literal> operations performed
           directly on partitions are not replicated.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -279,10 +306,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +325,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +477,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

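To make the new syntax and the catalog view documented above easier to
follow, here is a minimal usage sketch based only on what these patches
add (object and publication names are invented for illustration; this
reflects the proposed patch set, not committed behaviour):

-- requires superuser, per the CREATE PUBLICATION changes above
CREATE SEQUENCE public.orders_id_seq;

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION all_pub FOR ALL TABLES, ALL SEQUENCES;

-- the new pg_publication_sequences view lists the sequences each
-- publication contains
SELECT pubname, schemaname, sequencename
  FROM pg_publication_sequences
 WHERE pubname = 'seq_pub';

Temporary and unlogged sequences would not appear here, since only
permanent sequences are publishable.
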
Attachment: v20250522-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From c7ecf92c2852a4e0ce5aacaf3bd4bd0fe4d7bb17 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:15:34 +0530
Subject: [PATCH v20250522 2/5] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

In addition, psql's \d command now displays which publications include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 715 insertions(+), 335 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 0b5652071d1..3d7b9bec86c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19638,6 +19671,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c73e73a87d1..414fbe14553 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index cf34f71ea11..5fb1786ec03 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3214,6 +3214,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec65ab79fec..3dc84074e63 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9ea573fae21..d3e32001b20 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2343,6 +2343,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250522-0004-Enhance-sequence-synchronization-during-su.patch (text/x-patch)
From 5434287a70e372e9e530c55939958f8a058e25c4 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 13 May 2025 21:11:17 +0530
Subject: [PATCH v20250522 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of the sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing the synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
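
For illustration only (this sketch is not part of the patch), the
subscriber-side workflow described above could look roughly as follows; the
publication and subscription names are made up, and the srsubstate letters
assume the existing 'i' (init) and 'r' (ready) codes of pg_subscription_rel:

    -- publisher: publish all sequences (syntax from the earlier patch in this series)
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- subscriber: creating the subscription records the published sequences
    -- in pg_subscription_rel in INIT state and launches the sequencesync worker
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub_seq;

    -- later: re-synchronize every published sequence (new command in this patch)
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;

    -- inspect per-relation synchronization state
    SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;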
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  11 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 625 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 ++-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |  11 +-
 src/include/catalog/pg_subscription_rel.h     |  12 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |  11 +-
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 226 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 34 files changed, 1559 insertions(+), 204 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, all_states being true means get all tables; otherwise
+ * get only the tables that have not reached READY state.
+ * If getting sequences, all_states being true means get all sequences;
+ * otherwise get only the sequences that have not reached READY state (i.e.
+ * are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 15efb02badb..9b419750489 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
@@ -1391,6 +1401,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 01cd0e07fc2..8bbacd21ce9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * The log_cnt parameter is currently used only by the sequencesync worker to
+ * set log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, newly added relations are registered in INIT state;
+ * otherwise they are registered in READY state.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'.
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3d7b9bec86c..34c52747d2f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime()
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..c5763d217d9
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,625 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. In contrast, the launcher
+ *    process operates without direct database access, so it would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still sequences to be synced after the sequencesync worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Raises an error listing all sequences that are missing on the publisher,
+ * as well as those whose parameters do not match the subscriber's.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_msg = makeStringInfo();
+
+	appendStringInfo(combined_error_msg, "logical replication sequence synchronization failed for subscription \"%s\":",
+					 MySubscription->name);
+
+	if (missing_seqs->len)
+		appendStringInfo(combined_error_msg, " sequences (%s) are missing on the publisher.",
+						 missing_seqs->data);
+
+	if (mismatched_seqs->len)
+	{
+		/* Add a separator if both types of errors exist */
+		if (missing_seqs->len)
+			appendStringInfoString(combined_error_msg, " Additionally,");
+
+		appendStringInfo(combined_error_msg, " parameters differ for the remote and local sequences (%s)",
+						 mismatched_seqs->data);
+	}
+
+	ereport(ERROR, errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("%s", combined_error_msg->data));
+
+	destroyStringInfo(combined_error_msg);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
+{
+	int			total_seqs = list_length(remotesequences);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods, so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *sequence_info = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!strcmp(sequence_info->nspname, nspname) &&
+					!strcmp(sequence_info->seqname, seqname))
+					seqinfo = sequence_info;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, is_called, log_cnt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											MySubscription->name,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s\".\"%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_indexes is not incremented sequentially because some
+		 * sequences may be missing, and the number of fetched rows may not
+		 * match the batch size.
+		 */
+		current_index += batch_size;
+	}
+
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		seq_info->remote_seq_fetched = false;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid);
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors here, since those are most likely
+ * caused by system resource problems (for example, out of memory) and are
+ * not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates()
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..6ed37bb57b9 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1516,7 +1516,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1561,7 +1562,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1570,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1584,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..ed6c2da04e9 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4536,7 +4562,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4648,8 +4675,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4755,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4778,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4833,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4825,6 +4862,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4837,9 +4878,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 97af7c6554f..7bf3b8d40e5 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 414fbe14553..eaab91ef456 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 3dc84074e63..1206c515a0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ff14256c0ab..20ac0802b41 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5695,9 +5695,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
@@ -12268,6 +12268,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..ab10442e872 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +98,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 378f2f2c2ba..03d4df572f4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..979756894b7
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,226 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub": parameters differ for the remote and local sequences \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ac0eb8ef27a..b27db84589f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1623,6 +1623,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

#230Nisha Moond
nisha.moond412@gmail.com
In reply to: vignesh C (#229)
Re: Logical Replication of sequences

On Thu, May 22, 2025 at 10:42 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250522 patch has the changes for the same.

Thank you for the patches. Please find below my comments for patch 0004.

1)
+/*
+ * report_error_sequences
+ *
+ * Logs a warning listing all sequences that are missing on the publisher,
+ * as well as those with value mismatches relative to the subscriber.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)

The function description should be updated to reflect the recent
changes, as it now raises an error instead of issuing a warning.
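
For example, the header comment could read something like this (just a
sketch, exact wording up to you):

/*
 * report_error_sequences
 *
 * Reports an error listing all sequences that are missing on the publisher,
 * as well as those whose parameters do not match the subscriber's.
 */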

2)
+ ereport(ERROR, errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("%s", combined_error_msg->data));
+
+ destroyStringInfo(combined_error_msg);
+}

I think we can remove destroyStringInfo() as we will never reach here
in case of error.
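
Just to illustrate (a minimal sketch using the names from the quoted hunk):
since ereport(ERROR, ...) performs a non-local exit and never returns,
anything after it in this function is dead code:

	ereport(ERROR, errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
			errmsg("%s", combined_error_msg->data));
	/* not reached -- the destroyStringInfo() call can simply be dropped */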

3)
+ * we want to avoid keeping this batch transaction open for extended
+ * periods so it iscurrently limited to 100 sequences per batch.
+ */

typo : iscurrently / is currently

4)
+ HeapTuple tup;
+ Form_pg_sequence seqform;
+ LogicalRepSequenceInfo *seqinfo;
+
[...]
+ Assert(seqinfo);

Since there's an assertion for 'seqinfo', it would be safer to
initialize it to NULL to avoid any unexpected behavior.
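
Something like the following (a minimal sketch; only the initializer is new):

	HeapTuple	tup;
	Form_pg_sequence seqform;
	LogicalRepSequenceInfo *seqinfo = NULL;	/* so Assert(seqinfo) catches a lookup miss */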

6)
+ if (missing_seqs->len || mismatched_seqs->len)
+ report_error_sequences(missing_seqs, mismatched_seqs);

I think it would be helpful to add a comment for this check, perhaps
something like:
/*
* Report an error if any sequences are missing on the remote side
* or if local sequence parameters don't match with the remote ones.
*/
Please rephrase if needed.
~~~~

--
Thanks,
Nisha

#231Ajin Cherian
itsajin@gmail.com
In reply to: vignesh C (#229)
Re: Logical Replication of sequences

On Fri, May 23, 2025 at 3:12 AM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250522 patch has the changes for the same.

Regards,
Vignesh

Some review comments for patch 0001:
1. In src/backend/commands/sequence.c, in pg_sequence_state():
+ /* open and lock sequence */
+ init_sequence(seq_relid, &elm, &seqrel);
+
+ if (pg_class_aclcheck(elm->relid, GetUserId(),
+  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied for sequence %s",
+   RelationGetRelationName(seqrel)));
+

How about using aclcheck_error() for this? It supports error messages
specific to the type of access failure, and most other object types already
use it:

if (aclresult != ACLCHECK_OK)
	aclcheck_error(aclresult, get_relkind_objtype(seqrel->rd_rel->relkind),
				   RelationGetRelationName(seqrel));

2. In function pg_sequence_state():
+
+ UnlockReleaseBuffer(buf);
+ relation_close(seqrel, NoLock);

Ideally, the close call corresponding to init_sequence() is sequence_close()
rather than relation_close().
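
A minimal sketch of what I mean, assuming init_sequence() opens the relation
via sequence_open() as the other callers in sequence.c do:

	UnlockReleaseBuffer(buf);
	sequence_close(seqrel, NoLock);	/* pairs with init_sequence()/sequence_open() */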

I will post comments for the other patches as well.

regards,
Ajin Cherian
Fujitsu Australia

#232vignesh C
vignesh21@gmail.com
In reply to: Nisha Moond (#230)
5 attachment(s)
Re: Logical Replication of sequences

On Wed, 28 May 2025 at 20:52, Nisha Moond <nisha.moond412@gmail.com> wrote:

On Thu, May 22, 2025 at 10:42 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250522 patch has the changes for the same.

Thank you for the patches, please find comments for patch-0004.

1)
+/*
+ * report_error_sequences
+ *
+ * Logs a warning listing all sequences that are missing on the publisher,
+ * as well as those with value mismatches relative to the subscriber.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)

The function description should be updated to reflect the recent
changes, as it now raises an error instead of issuing a warning.

2)
+ ereport(ERROR, errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("%s", combined_error_msg->data));
+
+ destroyStringInfo(combined_error_msg);
+}

I think we can remove destroyStringInfo() as we will never reach here
in case of error.

3)
+ * we want to avoid keeping this batch transaction open for extended
+ * periods so it iscurrently limited to 100 sequences per batch.
+ */

typo : iscurrently / is currently

4)
+ HeapTuple tup;
+ Form_pg_sequence seqform;
+ LogicalRepSequenceInfo *seqinfo;
+
[...]
+ Assert(seqinfo);

Since there's an assertion for 'seqinfo', it would be safer to
initialize it to NULL to avoid any unexpected behavior.

6)
+ if (missing_seqs->len || mismatched_seqs->len)
+ report_error_sequences(missing_seqs, mismatched_seqs);

I think it would be helpful to add a comment for this check, perhaps
something like:
/*
* Report an error if any sequences are missing on the remote side
* or if local sequence parameters don't match with the remote ones.
*/
Please rephrase if needed.

These comments are handled in the attached v2025029 version patch.

Regards,
Vignesh

Attachments:

v2025029-0001-Introduce-pg_sequence_state-function-for-en.patch (text/x-patch)
From 36ea13e5132b11c0717cff9d156035c1c6a74adc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v2025029 1/5] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c67688cbf5f..9a442df3ba5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..cf357650a24 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current on-disk last_value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 37a484147a8..a160c2e61a8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v2025029-0004-Enhance-sequence-synchronization-during-sub.patch (text/x-patch)
From 1bb89e8e11abae89243b782581ab42abf794b8c3 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 13 May 2025 21:11:17 +0530
Subject: [PATCH v2025029 4/5] Enhance sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Raises an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_publication.c          |  46 ++
 src/backend/catalog/pg_subscription.c         |  63 +-
 src/backend/catalog/system_views.sql          |  11 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       | 322 +++++++--
 src/backend/executor/execReplication.c        |   4 +-
 src/backend/parser/gram.y                     |  11 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 628 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  75 ++-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/bin/psql/tab-complete.in.c                |   2 +-
 src/include/catalog/pg_proc.dat               |  11 +-
 src/include/catalog/pg_subscription_rel.h     |  12 +-
 src/include/commands/sequence.h               |   3 +
 src/include/nodes/parsenodes.h                |   3 +-
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/regress/expected/rules.out           |  11 +-
 src/test/regress/expected/subscription.out    |   4 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 226 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 34 files changed, 1562 insertions(+), 204 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 617ed0b82c9..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1370,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..68b55bb5ea5 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * all_states:
+ * If getting tables, if all_states is true get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if all_states is true get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool all_states)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -545,7 +576,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 				BTEqualStrategyNumber, F_OIDEQ,
 				ObjectIdGetDatum(subid));
 
-	if (not_ready)
+	if (!all_states)
 		ScanKeyInit(&skey[nkeys++],
 					Anum_pg_subscription_rel_srsubstate,
 					BTEqualStrategyNumber, F_CHARNE,
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 08f780a2e63..dde8a71b84d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
@@ -1386,6 +1396,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf357650a24..8073066f488 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled, SEQ_LOG_CNT_INVALID);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..83be0bae062 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * this origin is unnecessary. It can be created later during the ALTER
+	 * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+	 * include tables.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +711,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +725,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +740,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +766,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +795,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +834,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +920,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, true);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +949,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +974,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +991,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1009,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1055,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1495,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1510,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1551,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1570,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1594,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1607,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1750,7 +1868,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -1773,7 +1891,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, false);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2205,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2245,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2430,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3d7b9bec86c..34c52747d2f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 10677da56b2..fb3be0236de 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime()
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..3cc208bf0d7
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,628 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequencesync worker already running?
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_msg = makeStringInfo();
+
+	appendStringInfo(combined_error_msg, "logical replication sequence synchronization failed for subscription \"%s\":",
+					 MySubscription->name);
+
+	if (missing_seqs->len)
+		appendStringInfo(combined_error_msg, " sequences (%s) are missing on the publisher.",
+						 missing_seqs->data);
+
+	if (mismatched_seqs->len)
+	{
+		/* Add a separator if both types of errors exist */
+		if (missing_seqs->len)
+			appendStringInfoString(combined_error_msg, " Additionally,");
+
+		appendStringInfo(combined_error_msg, " parameters differ for the remote and local sequences (%s)",
+						 mismatched_seqs->data);
+	}
+
+	ereport(ERROR, errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("%s", combined_error_msg->data));
+}
+
+/*
+ * Copy existing sequence data from the publisher.
+ *
+ * Fetch each sequence's value from the publisher and set the corresponding
+ * subscriber sequence to the same value. The caller is responsible for
+ * locking the local relations.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
+{
+	int			total_seqs = list_length(remotesequences);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
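+		/*
+		 * Result column types, in the order produced by the query below:
+		 * schname, seqname, the pg_sequence_state() columns (page_lsn,
+		 * last_value, log_cnt, is_called), and the pg_sequence parameters
+		 * (seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle).
+		 */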
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Build the list of (schema, sequence) name pairs for the current
+		 * batch to fetch from the publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
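+		/*
+		 * In a single round trip, fetch the current state of each sequence in
+		 * this batch (via pg_sequence_state()) together with its definition
+		 * from pg_sequence on the publisher.
+		 */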
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *sequence_info = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!strcmp(sequence_info->nspname, nspname) &&
+					!strcmp(sequence_info->seqname, seqname))
+					seqinfo = sequence_info;
+			}
+
+			Assert(seqinfo);
+
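+			/*
+			 * Mark the sequence as fetched; any entry still unmarked after
+			 * this batch is missing on the publisher and is reported later.
+			 */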
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, is_called, log_cnt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											MySubscription->name,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
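+				/*
+				 * Remember the mismatch; all mismatches are reported together
+				 * once every batch has been processed.
+				 */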
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the full batch size rather than by the
+		 * number of fetched rows; sequences missing on the publisher can make
+		 * the number of fetched rows smaller than the batch size.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, false);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the sequence copy runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		seq_info->remote_seq_fetched = false;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
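+		/* Close the relation but keep the lock until transaction end. */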
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid);
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	SyncFinishWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 3d405ff2dc6..1d7d7543af5 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-SyncFinishWorker(void)
+SyncFinishWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ SyncFinishWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 SyncProcessRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ SyncProcessRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates(void)
 {
+	/*
+	 * has_subtables is static so that the cached value can be reused until an
+	 * invalidation of pg_subscription_rel marks the relation states as
+	 * needing a rebuild.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,16 +174,19 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   false);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
@@ -161,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -186,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 9bd51ceef48..6ed37bb57b9 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		SyncFinishWorker();
+		SyncFinishWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			SyncFinishWorker(); /* doesn't return */
+			SyncFinishWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1516,7 +1516,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1561,7 +1562,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1570,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	SyncFinishWorker();
+	SyncFinishWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1584,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 765754bfc3c..ed6c2da04e9 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			SyncProcessRelations(last_received);
 		}
 
@@ -4536,7 +4562,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4648,8 +4675,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4728,6 +4755,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4747,14 +4778,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4799,6 +4833,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4825,6 +4862,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4837,9 +4878,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 97af7c6554f..7bf3b8d40e5 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2f8cbd86759..c8779efe183 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6ca8cd344a8..a3594186611 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 3dc84074e63..1206c515a0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a160c2e61a8..b46df621b19 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5695,9 +5695,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
@@ -12258,6 +12258,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..ab10442e872 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +98,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool all_states);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..26e3c9096ae 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9b9656dd6e3..e3db33e85fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4321,7 +4321,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 378f2f2c2ba..03d4df572f4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 082e2b3d86c..7b6fe125b99 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void SyncFinishWorker(void);
+pg_noreturn extern void SyncFinishWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void SyncProcessRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..979756894b7
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,226 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
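+# Sequences share pg_subscription_rel with tables, so waiting for all entries
+# to reach the 'r' (ready) state covers the sequence sync as well.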
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync newly added
+# sequences on the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync newly
+# added sequences on the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for parameters differing is logged.
+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub": parameters differ for the remote and local sequences \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 36af636a4c0..4f0d87b8c00 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1623,6 +1623,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v2025029-0003-Reorganize-tablesync-Code-and-Introduce-syn.patch (text/x-patch)
From b0e61bd594c3d7f45cae9268e8ba9ab57828a531 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v2025029 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..3d405ff2dc6
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+SyncFinishWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+SyncProcessRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..9bd51ceef48 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the SyncFinishWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		SyncFinishWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			SyncFinishWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	SyncFinishWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4151a4b2a96..765754bfc3c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	SyncProcessRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	SyncProcessRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	SyncProcessRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			SyncProcessRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4797,7 +4797,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..082e2b3d86c 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void SyncFinishWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void SyncProcessRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 2351d9644f7..36af636a4c0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,7 +2899,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v2025029-0005-Documentation-for-sequence-synchronization-.patch (text/x-patch)
From eb41ce69ecd656647e54edaf2dc15e2687236b5c Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v2025029 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  79 +++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 8 files changed, 450 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index cbd4e40a320..31bbfe08d00 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8155,16 +8155,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8198,7 +8201,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8207,12 +8210,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f4a0191c55b..df4ae4b2854 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5184,9 +5184,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5327,8 +5327,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5351,10 +5351,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 686dd441d02..cc5cd986d92 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1711,6 +1715,204 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged listing all differing
+     sequences before the process exits. The apply worker detects the failure
+     and repeatedly respawns the sequence synchronization worker to continue
+     the synchronization process until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
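+   <para>
+    For example, assuming the mismatch is an <literal>INCREMENT</literal>
+    value of 2 on the publisher for a sequence named <literal>s5</literal>,
+    and a subscription named <literal>sub1</literal> (the names and values
+    here are only illustrative), the subscriber could be realigned like this:
+<programlisting>
+-- align the local definition with the publisher's (INCREMENT 2 assumed here)
+ALTER SEQUENCE s5 INCREMENT BY 2;
+-- then re-synchronize the sequence data (subscription name sub1 assumed)
+ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+</programlisting></para>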
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber can become stale, because updates to
+    sequences on the publisher are not replicated incrementally.
+   </para>
+   <para>
+    To detect stale sequences, compare the sequence values on the publisher
+    and the subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
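+   <para>
+    A simple way to compare is to run the same query against a sequence on
+    both nodes (the sequence name <literal>s1</literal> is only an example):
+<programlisting>
+-- run on both the publisher and the subscriber; s1 is an example sequence name
+SELECT last_value, is_called FROM s1;
+</programlisting></para>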
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2040,16 +2242,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, changes to the sequences themselves are not replicated.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2367,8 +2572,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2381,8 +2586,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..bfedf51bfdb 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,15 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +121,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +152,31 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -188,6 +204,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           for logical replication does not take this parameter into account when
           copying existing table data.
          </para>
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -224,6 +243,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           See <xref linkend="logical-replication-gencols"/> for more details about
           logical replication of generated columns.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -259,6 +282,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           If this is enabled, <literal>TRUNCATE</literal> operations performed
           directly on partitions are not replicated.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -279,10 +306,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +325,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +477,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables and
+   synchronizes all sequences:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index b58c52ea50f..066a8c526db 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
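
For reference, the pg_publication_sequences view documented above can be
queried like any other system view; a minimal sketch, reusing the
publication name from the documentation example:

    -- list the sequences a FOR ALL SEQUENCES publication would synchronize
    SELECT schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'all_sequences'
    ORDER BY schemaname, sequencename;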

Attachment: v2025029-0002-Introduce-ALL-SEQUENCES-support-for-Postgre.patch (text/x-patch)
From 2b344cf827068356f1d725ae079ce1bd9d93f089 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:15:34 +0530
Subject: [PATCH v2025029 2/5] Introduce "ALL SEQUENCES" support for PostgreSQL
 logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now shows which publications contain the
specified sequence, and \dRp shows whether a publication includes all
sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
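
As a quick sketch of what this enables (publication names below are made
up for illustration):

    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_everything FOR ALL TABLES, ALL SEQUENCES;

    -- repeating an object type is rejected:
    CREATE PUBLICATION pub_dup FOR ALL SEQUENCES, ALL SEQUENCES;
    -- ERROR:  invalid publication object list
    -- DETAIL:  ALL SEQUENCES can be specified only once.

\dRp+ then reports the new "All sequences" attribute for each publication,
and pg_dump preserves the clause when dumping such publications.
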
---
 src/backend/catalog/pg_publication.c      |  40 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   8 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 556 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 715 insertions(+), 335 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..617ed0b82c9 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1061,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1083,6 +1120,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 0b5652071d1..3d7b9bec86c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ALL pub_obj_type ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19638,6 +19671,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no pub_object_type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 37432e66efd..6ca8cd344a8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 386e21e0c59..25e4b0583e5 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3273,6 +3273,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 1d08268393e..3d38f32f6ab 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec65ab79fec..3dc84074e63 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -164,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4610fc61293..9b9656dd6e3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4253,6 +4253,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4260,6 +4276,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..c128322be05 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..97ea0f593b9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a8346cda633..2351d9644f7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2339,6 +2339,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#233vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#231)
Re: Logical Replication of sequences
Some review comments for patch 0001:
1. In src/backend/commands/sequence.c, in pg_sequence_state():
+ /* open and lock sequence */
+ init_sequence(seq_relid, &elm, &seqrel);
+
+ if (pg_class_aclcheck(elm->relid, GetUserId(),
+  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied for sequence %s",
+   RelationGetRelationName(seqrel)));
+

How about using aclcheck_error for this, which also supports error messages for specific access errors? Most other objects seem to use this.
if (aclresult != ACLCHECK_OK)
    aclcheck_error(aclresult, get_relkind_objtype(seqrel->rd_rel->relkind),
                   RelationGetRelationName(seqrel));

I felt this is OK in this case, as the same pattern is used in
nextval_internal, currval_oid, lastval, do_setval, and
pg_sequence_parameters as well.

2. In function pg_sequence_state():
+
+ UnlockReleaseBuffer(buf);
+ relation_close(seqrel, NoLock);

Ideally the corresponding close for init_sequence is sequence_close() rather than relation_close()

Fixed
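
For illustration, the change amounts to roughly the following in pg_sequence_state() (a sketch only; the actual hunk is in the attached patch):

    /* pair init_sequence()'s open with sequence_close(), not relation_close() */
    UnlockReleaseBuffer(buf);
    sequence_close(seqrel, NoLock);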

The fix for this comment is included in the v2025029 version patch
attached at [1].
[1]: /messages/by-id/CALDaNm0ssEaHW8by5kd1=wE7LPMhhBiV6971JbFWsY6Qwp7NMw@mail.gmail.com

Regards,
Vignesh

#234shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#232)
Re: Logical Replication of sequences

On Thu, May 29, 2025 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:

These comments are handled in the attached v2025029 version patch.

Thanks for the patches. I am still reviewing, but please find a few comments:

1)
<para>
Only persistent sequences are included in the publication. Temporary
sequences are excluded from the publication.
</para>

We should mention UNLOGGED sequences here as well, along with TEMP sequences.

2)
Why do we have GetAllSequencesPublicationRelations() in patch-0002? It
is used only in patch-0004. The same applies to the
is_publishable_class() change.

3)
process_syncing_tables_for_sync() is renamed to ProcessSyncingTablesForSync()
process_syncing_tables_for_apply() is renamed to ProcessSyncingTablesForApply()
process_syncing_tables() is renamed to SyncProcessRelations()

Why have we named it SyncProcessRelations and not
ProcessSyncingRelations? Is it because we want the name to start with
'Sync' so it carries the file-name initials? I do not see other files
following that convention. IMO, ProcessSyncingTables looks more
familiar and apt than SyncProcessRelations. The same goes for
'SyncFinishWorker'; FinishSyncWorker looks better. Thoughts?

4)
postgres=# CREATE publication pub1 for sequences;
ERROR: invalid publication object list
LINE 1: CREATE publication pub1 for sequences;
^
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.

Do you think we should mention sequence-specific info in the DETAIL as well now?

thanks
Shveta

#235Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#232)
Re: Logical Replication of sequences

On Thu, May 29, 2025 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:

These comments are handled in the attached v2025029 version patch.

1. The current syntax to publish sequences is:
CREATE PUBLICATION pub1 FOR ALL TABLES, ALL SEQUENCES;

The other alternative could be:
CREATE PUBLICATION pub1 FOR ALL TABLES, SEQUENCES;

I think the syntax proposed by the patch is better because of the
following reasons: (a) The use of ALL before both objects makes the
intent explicit and symmetrical. (b) Since we plan to support FOR
TABLE t1, SEQUENCE s1, ALL SEQUENCES fits naturally as a counterpart
to ALL TABLES.

Please let me know if anyone thinks otherwise.
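
To make the comparison concrete, the variants discussed here would read as follows (a sketch; the per-object SEQUENCE form is only planned and is not part of the current patch set):

-- syntax proposed by the patch set: ALL before each object kind
CREATE PUBLICATION pub1 FOR ALL TABLES, ALL SEQUENCES;

-- considered alternative, without ALL before SEQUENCES:
-- CREATE PUBLICATION pub1 FOR ALL TABLES, SEQUENCES;

-- planned follow-up, where ALL SEQUENCES mirrors ALL TABLES naturally:
-- CREATE PUBLICATION pub2 FOR TABLE t1, SEQUENCE s1;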

2. The 0001 patch has the following commit message:
"Introduce pg_sequence_state function for enhanced sequence
management. This patch introduces a new function, 'pg_sequence_state',
which
allows retrieval of sequence values, including the associated LSN."

It doesn't mention the purpose of introducing this new function.

Following are some comments on the 0004 patch:
3.
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+ bool all_states)

What is the need to change the not_ready flag? Can't not_ready serve the need?

4.
+void
+SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt)

The argument order seems odd. Isn't it better to keep 'int64 log_cnt'
next to 'int64 next' and then a bool at the end?
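
Concretely, the suggested reordering is along these lines (a sketch of the declaration only, not the patch's actual code):

/* current declaration in the patch */
void SetSequence(Oid relid, int64 next, bool is_called, int64 log_cnt);

/* suggested: keep the two int64 values together, bool at the end */
void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);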

5.
+ /*
+ * XXX: If the subscription is for a sequence-only publication, creating
+ * this origin is unnecessary. It can be created later during the ALTER
+ * SUBSCRIPTION ... REFRESH command, if the publication is updated to
+ * include tables.
+ */

Can you explain in comments why creating an origin is unnecessary? The
same is true for a similar comment on slots. If the patch explains it
somewhere else, then add a reference to that.

6. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

Can we move the implementation of the above command to a separate
patch? This is to make 0004 shorter and easier to review.
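
For readers following the thread, the command in question would be run
on the subscriber, roughly as below (the subscription name is
hypothetical):

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;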

--
With Regards,
Amit Kapila.

#236Nisha Moond
nisha.moond412@gmail.com
In reply to: Amit Kapila (#235)
6 attachment(s)
Re: Logical Replication of sequences

On Tue, Jun 3, 2025 at 5:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, May 29, 2025 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:

These comments are handled in the attached v2025029 version patch.

1. The current syntax to publish sequences is:
CREATE PUBLICATION pub1 FOR ALL TABLES, ALL SEQUENCES;

The other alternative could be:
CREATE PUBLICATION pub1 FOR ALL TABLES, SEQUENCES;

I think the syntax proposed by the patch is better for the following
reasons: (a) The use of ALL before both object types makes the intent
explicit and symmetrical. (b) As we are planning to support FOR TABLE
t1, SEQUENCE s1, ALL SEQUENCES fits naturally as a counterpart to ALL
TABLES.

Please let me know if anyone thinks otherwise.

+1

6. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

Can we move the implementation of the above command to a separate
patch? This is to make 0004 shorter and easier to review.

Split patch-0004 as follows:
- patch-0004: Implements the new REFRESH PUBLICATION SEQUENCES command.
- patch-0005: Implementation of the sequencesync worker.

Attached patches address feedback from Amit [1] and Shveta [2].

[1]: /messages/by-id/CAJpy0uD00JCsgDxL3YjdPQFSnV4mv4D9XPZV_9=aMNDLao7SQQ@mail.gmail.com
[2]: /messages/by-id/CAA4eK1+6L+AoGS3LHdnYnCE=nRHergSQyhyO7Y=-sOp7isGVMw@mail.gmail.com

--
Thanks,
Nisha

Attachments:

v20250610-0001-Introduce-pg_sequence_state-function-for-e.patch
From 6fa349fc43f1579a679f0b2da688c2d5001fc0ce Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250610 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c67688cbf5f..9a442df3ba5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19935,6 +19935,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..cf357650a24 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d3d28a263fa..eacb553075e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.34.1

v20250610-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch
From 6f08269b286ea8ec35a7f4314c47d9142454876c Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:15:34 +0530
Subject: [PATCH v20250610 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 560 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 680 insertions(+), 337 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 0b5652071d1..958ff4c226a 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -204,6 +204,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10614,7 +10620,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10634,13 +10645,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10752,6 +10763,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19638,6 +19671,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
@@ -19657,7 +19731,7 @@ preprocess_pubobj_list(List *pubobjspec_list, core_yyscan_t yyscanner)
 		ereport(ERROR,
 				errcode(ERRCODE_SYNTAX_ERROR),
 				errmsg("invalid publication object list"),
-				errdetail("One of TABLE or TABLES IN SCHEMA must be specified before a standalone table or schema name."),
+				errdetail("One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be specified before a standalone table or schema name."),
 				parser_errposition(pubobj->location));
 
 	foreach(cell, pubobjspec_list)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 37432e66efd..8453a3670f3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 386e21e0c59..09a54e310ce 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3273,6 +3273,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 24e0100c9f0..5859443d0aa 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1757,28 +1757,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1787,22 +1778,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1812,6 +1796,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1843,32 +1880,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6397,7 +6455,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6414,13 +6472,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6531,6 +6596,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6545,6 +6611,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6554,7 +6621,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6629,6 +6707,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6642,6 +6722,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6654,15 +6736,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec65ab79fec..3dc84074e63 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..843fe784d64 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index dd00ab420b8..03ec92d2098 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4258,6 +4258,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4265,6 +4281,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..ffae6b5ded2 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1453,7 +1525,7 @@ CREATE PUBLICATION testpub_forschema1 FOR CURRENT_SCHEMA;
 ERROR:  invalid publication object list
 LINE 1: CREATE PUBLICATION testpub_forschema1 FOR CURRENT_SCHEMA;
                                                   ^
-DETAIL:  One of TABLE or TABLES IN SCHEMA must be specified before a standalone table or schema name.
+DETAIL:  One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be specified before a standalone table or schema name.
 -- check create publication on CURRENT_SCHEMA along with FOR TABLE
 CREATE PUBLICATION testpub_forschema1 FOR TABLE CURRENT_SCHEMA;
 ERROR:  syntax error at or near "CURRENT_SCHEMA"
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1732,7 +1804,7 @@ CREATE PUBLICATION testpub_error FOR pub_test2.tbl1;
 ERROR:  invalid publication object list
 LINE 1: CREATE PUBLICATION testpub_error FOR pub_test2.tbl1;
                                              ^
-DETAIL:  One of TABLE or TABLES IN SCHEMA must be specified before a standalone table or schema name.
+DETAIL:  One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be specified before a standalone table or schema name.
 DROP VIEW testpub_view;
 DROP PUBLICATION testpub_default;
 DROP PUBLICATION testpub_ins_trunct;
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..1f7b7ebe4d5 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a8346cda633..2351d9644f7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2339,6 +2339,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20250610-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/x-patch)
From 6d4c89546fbd39a451832dc3db2568dc8f452490 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250610 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..e8bbce141b7
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..f6111f94340 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within FinishSyncWorker(). Even if
+		 * we fail to remove this here, the apply worker will ensure it is
+		 * cleaned up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a23262957ac..f730578c219 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4805,7 +4805,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..5394fbc4afe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 2351d9644f7..36af636a4c0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,7 +2899,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1
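
Patch 0003 is only code movement, so there is nothing new to exercise at the
SQL level; the per-relation sync states that SyncFetchRelationStates() reads
can still be inspected on the subscriber with a query along these lines (the
subscription name is only an example):

    SELECT srrelid::regclass AS relation, srsubstate
      FROM pg_subscription_rel
     WHERE srsubid = (SELECT oid FROM pg_subscription
                       WHERE subname = 'mysub');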

Attachment: v20250610-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/x-patch)
From b825cd6549fbbfb1b3d02fc7ff05c38ac1019d1e Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250610 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
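
For example (the subscription name is only an illustration):

  ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

This marks all of the subscription's sequences as INIT so that they are
re-synchronized with the publisher's values.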
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 322 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 444 insertions(+), 95 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets the list of sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..37bf385bb60 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If true, return only relations that have not yet reached the READY state;
+ * for sequences this means they are still in the INIT state. If false,
+ * return all of the requested tables and/or sequences.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 08f780a2e63..9853fd50b35 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
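The view added above can be queried much like pg_publication_tables, and it is
backed by the new pg_get_publication_sequences() function (the publication
name below is only an example):

    SELECT * FROM pg_publication_sequences WHERE pubname = 'pub_seq';
    SELECT relid::regclass FROM pg_get_publication_sequences('pub_seq');
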
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..32c77ad372c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * a replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +836,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that were added to or removed from the publication since the
+ * subscription was created or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that were added to or removed from the publication
+ * since the subscription was created or last refreshed.
+ *
+ * Note that this is a common function for handling the different REFRESH
+ * commands, according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +897,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +922,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +951,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +976,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +993,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1011,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1057,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 958ff4c226a..fbd0188cd78 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10894,11 +10894,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e8bbce141b7..db15051f47b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ SyncFetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8453a3670f3..f038f6ff3e9 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 3dc84074e63..1206c515a0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index eacb553075e..fa824499fa2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12258,6 +12258,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 843fe784d64..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 03ec92d2098..4dee92f4089 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4326,7 +4326,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.34.1

Attachment: v20250610-0005-New-worker-for-sequence-synchronization-du.patch (application/x-patch)
From 6934332931e4fa11fa2b08e5ccdb5e6f3c6430ee Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 9 Jun 2025 17:45:24 +0530
Subject: [PATCH v20250610 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization.

Sequences have two states:
   - INIT (needs synchronizing)
   - READY (already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of all sequences in INIT state using
       pg_sequence_state().
    b) Records any sequences whose parameters differ between the publisher
       and the subscriber (an error listing them is raised at the end).
    c) Sets the local values of the matching sequences accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing them in
       batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - A sequencesync worker (see above) is launched to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - A sequencesync worker (see above) is launched to synchronize only the
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences (a
      usage sketch follows below).
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - A sequencesync worker (see above) is launched to synchronize all
      sequences.
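
To make the intended usage concrete, here is a minimal sketch of the new
command together with a query for watching the per-sequence state on the
subscriber. The subscription name "sub1" is only illustrative; 'i' and 'r'
are the INIT and READY state codes stored in pg_subscription_rel:

    -- On the subscriber: mark all published sequences for re-synchronization
    -- and let the sequencesync worker bring them up to date.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- Watch the sequences move from INIT ('i') to READY ('r').
    SELECT c.relname, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';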
---
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 628 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  70 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 226 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1157 insertions(+), 109 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl
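
As a rough sketch of how the result can be observed, using the view and
column names added by this patch series (the publication name "pub1" is
just an example):

    -- Publisher: which sequences are included in a publication?
    SELECT pubname, schemaname, sequencename
    FROM pg_publication_sequences
    WHERE pubname = 'pub1';

    -- Subscriber: is a sequencesync worker currently running?
    SELECT subname, worker_type, pid
    FROM pg_stat_subscription;

    -- Subscriber: cumulative count of sequence synchronization failures.
    SELECT subname, sequence_sync_error_count
    FROM pg_stat_subscription_stats;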

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 9853fd50b35..dde8a71b84d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1396,6 +1396,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf357650a24..60035e1979b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt of sequences while synchronizing their values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 32c77ad372c..2b5dff21e44 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1068,7 +1068,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 					sub_remove_rels[remove_rel_len].relid = relid;
 					sub_remove_rels[remove_rel_len++].state = state;
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 1c3c051403d..aa8f99e29aa 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime()
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..f402f9aca2f
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,628 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_msg = makeStringInfo();
+
+	appendStringInfo(combined_error_msg, "logical replication sequence synchronization failed for subscription \"%s\":",
+					 MySubscription->name);
+
+	if (missing_seqs->len)
+		appendStringInfo(combined_error_msg, " sequences (%s) are missing on the publisher.",
+						 missing_seqs->data);
+
+	if (mismatched_seqs->len)
+	{
+		/* Add a separator if both types of errors exist */
+		if (missing_seqs->len)
+			appendStringInfoString(combined_error_msg, " Additionally,");
+
+		appendStringInfo(combined_error_msg, " parameters differ for the remote and local sequences (%s)",
+						 mismatched_seqs->data);
+	}
+
+	ereport(ERROR, errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("%s", combined_error_msg->data));
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
+{
+	int			total_seqs = list_length(remotesequences);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *sequence_info = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!strcmp(sequence_info->nspname, nspname) &&
+					!strcmp(sequence_info->seqname, seqname))
+					seqinfo = sequence_info;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s\" has finished",
+											MySubscription->name,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the full batch_size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher and fewer rows may have been returned.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, true);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that sequence synchronization runs as the sequence owner,
+		 * unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		seq_info->remote_seq_fetched = false;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid);
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable; only ordinary errors are
+ * caught here so that the subscription can optionally be disabled.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index db15051f47b..5f5770a3908 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates()
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,12 +174,14 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f6111f94340..bea492d4b38 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1516,7 +1516,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1561,7 +1562,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1570,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1584,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f730578c219..c8e38973fc5 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4536,7 +4562,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4656,8 +4683,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4736,6 +4763,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4755,14 +4786,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4807,6 +4841,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4833,6 +4870,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4845,9 +4886,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index f04bfedb2fd..ff424159e54 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fa824499fa2..8eb7b0c061c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5695,9 +5695,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 378f2f2c2ba..03d4df572f4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 5394fbc4afe..7590df29910 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..979756894b7
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,226 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for differing sequence parameters is logged.
+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub": parameters differ for the remote and local sequences \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 36af636a4c0..4f0d87b8c00 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1623,6 +1623,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.34.1

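To illustrate the statistics change above: with the patch applied, the new
counter would show up next to the existing ones in the
pg_stat_subscription_stats view, for example (query shown for illustration
only; the column name follows the pg_proc.dat entry above):

```
-- Per-subscription error counters, including the new
-- sequence_sync_error_count added by this patch.
SELECT subname,
       apply_error_count,
       sequence_sync_error_count,
       sync_error_count
FROM pg_stat_subscription_stats;
```
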
Attachment: v20250610-0006-Documentation-for-sequence-synchronization.patch (application/x-patch)
From d31ccc1ac2c659347a735a4dd7d5bfb33abf81be Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250610 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  84 ++++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 8 files changed, 455 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index fa86c569dc4..7d7571a995c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 021153b2a5f..751edee00f5 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5184,9 +5184,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table synchronization worker or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5327,8 +5327,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5351,10 +5351,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 686dd441d02..cc5cd986d92 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1711,6 +1715,204 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged listing all differing
+     sequences before the process exits. The apply worker detects the failure
+     and repeatedly respawns the sequence synchronization worker to continue
+     the synchronization process until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2040,16 +2242,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2367,8 +2572,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2381,8 +2586,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..8c794d9b8d0 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +157,31 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -188,6 +209,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           for logical replication does not take this parameter into account when
           copying existing table data.
          </para>
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -224,6 +248,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           See <xref linkend="logical-replication-gencols"/> for more details about
           logical replication of generated columns.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -259,6 +287,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           If this is enabled, <literal>TRUNCATE</literal> operations performed
           directly on partitions are not replicated.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -279,10 +311,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +330,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +482,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 986ae1f543d..e02cc1e7c5a 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1

#237Nisha Moond
nisha.moond412@gmail.com
In reply to: Nisha Moond (#236)
1 attachment(s)
Re: Logical Replication of sequences

Hi,

Here are my review comments for v20250610 patches:

Patch-0005: sequencesync.c

1) report_error_sequences()

In case there are both missing and mismatched sequences, the ERROR
message logged is -

```
2025-05-28 14:22:19.898 IST [392259] ERROR: logical replication
sequence synchronization failed for subscription "subs": sequences
("public"."n84") are missing on the publisher. Additionally,
parameters differ for the remote and local sequences ("public.n1")
```

I feel this error message is quite long. Would it be possible to split
it into ERROR and DETAIL? Also, if feasible, we could consider
including a HINT, as was done in previous versions.

I explored a few possible ways to log this error with a hint. The
attached top-up patch implements this suggestion. Please see whether
it looks okay to you.
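
For illustration, a minimal sketch of the split I have in mind, shown
here only for the missing-sequences case (the exact wording in the
attached patch may differ):

```
/* Sketch only; mirrors the attached top-up patch for the missing case. */
ereport(ERROR,
        errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
        errmsg("logical replication sequence synchronization failed for subscription \"%s\"",
               MySubscription->name),
        errdetail("Sequences (%s) are missing on the publisher.",
                  missing_seqs->data),
        errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES."));
```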
~~~

2) copy_sequences():
+ /* Retrieve the sequence object fetched from the publisher */
+ for (int i = 0; i < batch_size; i++)
+ {
+ LogicalRepSequenceInfo *sequence_info =
lfirst(list_nth_cell(remotesequences, current_index + i));
+
+ if (!strcmp(sequence_info->nspname, nspname) &&
+ !strcmp(sequence_info->seqname, seqname))
+ seqinfo = sequence_info;
+ }

The current logic searches the local sequence list for each sequence
fetched from the publisher, re-traversing up to 100 entries (the batch
size) of the list per sequence, which may impact performance.

To improve efficiency, we can sort the local list and traverse it
only once for matching. Kindly review the implementation in the
attached top-up patch and consider merging it if it looks good to you.
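
Roughly, the idea is to sort the remote list by (schema name, sequence
name), have the publisher query return its rows in the same order, and
then advance a single search position through the list while matching.
A simplified sketch of the comparator (NULL handling omitted; the
attached patch has the complete version):

```
/* Simplified sketch; the attached top-up patch also handles NULL nspname. */
static int
sequence_comparator(const ListCell *s1, const ListCell *s2)
{
	LogicalRepSequenceInfo *seqinfo1 = lfirst(s1);
	LogicalRepSequenceInfo *seqinfo2 = lfirst(s2);
	int			cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);

	return cmp != 0 ? cmp : strcmp(seqinfo1->seqname, seqinfo2->seqname);
}
```

With both sides ordered the same way, copy_sequences() only needs one
list_sort(remotesequences, sequence_comparator) call before batching
and a forward-only search position while matching the fetched rows.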
~~~

3) copy_sequences():
+ if (message_level_is_interesting(DEBUG1))
+ ereport(DEBUG1,
+ errmsg_internal("logical replication synchronization for
subscription \"%s\", sequence \"%s\" has finished",
+ MySubscription->name,
+ seqinfo->seqname));
+
+ batch_succeeded_count++;
+ }

The current debug log might be a bit confusing when sequences with the
same name exist in different schemas. To improve clarity, we could
include the schema name in the message, e.g.,
" ... sequence "schema"."sequence" has finished".
~~~~

Few minor comments in doc - Patch-0006 : logical-replication.sgml

4)
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>

I think it would be better to use "To synchronize" instead of "To
replicate" here, to maintain consistency and avoid confusion between
replication and synchronization.

5)
+   <para>
+    Update the sequences at the publisher side few times.
+<programlisting>

/side few /side a few /

6) We can avoid using multiple "or"s in the sentences below:

6a)
-   a change set or replication set.  Each publication exists in only
one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set

/ table or a group of tables/ table, a group of tables/

6b)
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL
TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of

/ IN SCHEMA</literal>, or <literal>FOR ALL TABLES/ IN
SCHEMA</literal>, <literal>FOR ALL TABLES

--
Thanks,
Nisha

Attachments:

vnisha_005-top-up-patch-for-error-message-improvment.patchapplication/octet-stream; name=vnisha_005-top-up-patch-for-error-message-improvment.patchDownload
From 3dd7e00cb707f97be5095b6056314e1f06fb49f7 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Thu, 12 Jun 2025 15:30:00 +0530
Subject: [PATCH v20250612 6/7] top up patch for error message improvment and
 list sort

---
 .../replication/logical/sequencesync.c        | 82 +++++++++++++++----
 1 file changed, 67 insertions(+), 15 deletions(-)

diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index f402f9aca2f..6fa5b7ebcc8 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -184,27 +184,69 @@ ProcessSyncingSequencesForApply(void)
 static void
 report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
 {
-	StringInfo	combined_error_msg = makeStringInfo();
-
-	appendStringInfo(combined_error_msg, "logical replication sequence synchronization failed for subscription \"%s\":",
-					 MySubscription->name);
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
 
 	if (missing_seqs->len)
-		appendStringInfo(combined_error_msg, " sequences (%s) are missing on the publisher.",
-						 missing_seqs->data);
+	{
+		appendStringInfo(combined_error_detail, "Sequences (%s) are missing on the publisher.",
+ 						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION or use ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.");				 
+	}
 
 	if (mismatched_seqs->len)
 	{
 		/* Add a separator if both types of errors exist */
 		if (missing_seqs->len)
-			appendStringInfoString(combined_error_msg, " Additionally,");
-
-		appendStringInfo(combined_error_msg, " parameters differ for the remote and local sequences (%s)",
-						 mismatched_seqs->data);
+		{
+			appendStringInfo(combined_error_detail, " Additionally, parameters differ for the remote and local sequences (%s).",
+								   mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint," Alter or re-create local sequences to have the same parameters as the remote sequences");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Parameters differ for the remote and local sequences (%s).",
+ 							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "Alter or re-create local sequences to have the same parameters as the remote sequences.");
+		}
+		
 	}
 
 	ereport(ERROR, errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-			errmsg("%s", combined_error_msg->data));
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *)(s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *)(s2->ptr_value);
+
+	/* Compare by namespace name first */
+	if (seqinfo1->nspname == NULL && seqinfo2->nspname == NULL)
+		return 0;
+
+	if (seqinfo1->nspname == NULL)
+		return -1;
+
+	if (seqinfo2->nspname == NULL)
+		return 1;
+
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
 }
 
 /*
@@ -218,6 +260,7 @@ copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
 {
 	int			total_seqs = list_length(remotesequences);
 	int			current_index = 0;
+	int 		search_pos = 0;
 	StringInfo	mismatched_seqs = makeStringInfo();
 	StringInfo	missing_seqs = makeStringInfo();
 
@@ -225,6 +268,9 @@ copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
 			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
 				   MySubscription->name, total_seqs));
 
+	/* Sort the list of sequences to optimize the search */
+	list_sort(remotesequences, sequence_comparator);
+
 	/*
 	 * We batch synchronize multiple sequences per transaction, because the
 	 * alternative of synchronizing each sequence individually incurs overhead
@@ -274,7 +320,8 @@ copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
 						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
 						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
 						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
-						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n",
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
 						 seqstr->data);
 
 		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
@@ -349,13 +396,18 @@ copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
 			Assert(col == REMOTE_SEQ_COL_COUNT);
 
 			/* Retrieve the sequence object fetched from the publisher */
-			for (int i = 0; i < batch_size; i++)
+			while (search_pos < total_seqs)
 			{
-				LogicalRepSequenceInfo *sequence_info = lfirst(list_nth_cell(remotesequences, current_index + i));
+				LogicalRepSequenceInfo *sequence_info = lfirst(list_nth_cell(remotesequences, search_pos));
 
 				if (!strcmp(sequence_info->nspname, nspname) &&
 					!strcmp(sequence_info->seqname, seqname))
-					seqinfo = sequence_info;
+					{
+						seqinfo = sequence_info;
+						search_pos++;
+						break;
+					}
+				search_pos++;
 			}
 
 			Assert(seqinfo);
-- 
2.34.1

#238vignesh C
vignesh21@gmail.com
In reply to: Nisha Moond (#237)
6 attachment(s)
Re: Logical Replication of sequences

On Thu, 19 Jun 2025 at 11:26, Nisha Moond <nisha.moond412@gmail.com> wrote:

Hi,

Here are my review comments for v20250610 patches:

Patch-0005: sequencesync.c

1) report_error_sequences()

In case there are both missing and mismatched sequences, the ERROR
message logged is -

```
2025-05-28 14:22:19.898 IST [392259] ERROR: logical replication
sequence synchronization failed for subscription "subs": sequences
("public"."n84") are missing on the publisher. Additionally,
parameters differ for the remote and local sequences ("public.n1")
```

I feel this error message is quite long. Would it be possible to split
it into ERROR and DETAIL? Also, if feasible, we could consider
including a HINT, as was done in previous versions.

I explored a few possible ways to log this error with a hint. The
attached top-up patch implements this suggestion. Please see whether
it looks okay to you.

This looks good, merged it.

~~~

2) copy_sequences():
+ /* Retrieve the sequence object fetched from the publisher */
+ for (int i = 0; i < batch_size; i++)
+ {
+ LogicalRepSequenceInfo *sequence_info =
lfirst(list_nth_cell(remotesequences, current_index + i));
+
+ if (!strcmp(sequence_info->nspname, nspname) &&
+ !strcmp(sequence_info->seqname, seqname))
+ seqinfo = sequence_info;
+ }

The current logic searches the local sequence list for each sequence
fetched from the publisher, re-traversing up to 100 entries (the batch
size) of the list per sequence, which may impact performance.

To improve efficiency, we can sort the local list and traverse it
only once for matching. Kindly review the implementation in the
attached top-up patch and consider merging it if it looks good to you.

Looks good, merged it.

~~~

3) copy_sequences():
+ if (message_level_is_interesting(DEBUG1))
+ ereport(DEBUG1,
+ errmsg_internal("logical replication synchronization for
subscription \"%s\", sequence \"%s\" has finished",
+ MySubscription->name,
+ seqinfo->seqname));
+
+ batch_succeeded_count++;
+ }

The current debug log might be a bit confusing when sequences with the
same name exist in different schemas. To improve clarity, we could
include the schema name in the message, e.g.,
" ... sequence "schema"."sequence" has finished".

Modified

~~~~

Few minor comments in doc - Patch-0006 : logical-replication.sgml

4)
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>

I think it would be better to use "To synchronize" instead of "To
replicate" here, to maintain consistency and avoid confusion between
replication and synchronization.

Modified

5)
+   <para>
+    Update the sequences at the publisher side few times.
+<programlisting>

/side few /side a few /

6) We can avoid using multiple "or"s in the sentences below:

6a)
-   a change set or replication set.  Each publication exists in only
one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set

/ table or a group of tables/ table, a group of tables/

Modified

6b)
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL
TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of

/ IN SCHEMA</literal>, or <literal>FOR ALL TABLES/ IN
SCHEMA</literal>, <literal>FOR ALL TABLES

Modified

Thanks for the comments; the attached v20250622 version of the
patches includes these changes.

Regards,
Vignesh

Attachments:

v20250622-0001-Introduce-pg_sequence_state-function-for-e.patchtext/x-patch; charset=US-ASCII; name=v20250622-0001-Introduce-pg_sequence_state-function-for-e.patchDownload
From 723d26a8192cac95ff7f31874b56df2c3d71c416 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 20 Sep 2024 08:45:21 +0530
Subject: [PATCH v20250622 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index a6d79765c1a..9d677b614b7 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..cf357650a24 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d3d28a263fa..eacb553075e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '9876', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250622-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchtext/x-patch; charset=US-ASCII; name=v20250622-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchDownload
From 30290870f1ceee97147997fa4403a98c5022ca05 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:15:34 +0530
Subject: [PATCH v20250622 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    |  52 +-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 560 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 680 insertions(+), 337 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0b23d94c38e..ef13cf618d3 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (!superuser())
+	{
+		if (stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -886,6 +892,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -2019,19 +2027,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 50f53159d58..dbca175b23c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -203,6 +203,10 @@ static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 %}
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +450,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10612,7 +10618,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10632,13 +10643,13 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					preprocess_pub_all_objtype_list($6, &n->for_all_tables, &n->for_all_sequences, yyscanner);
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10750,6 +10761,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19562,6 +19595,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
@@ -19581,7 +19655,7 @@ preprocess_pubobj_list(List *pubobjspec_list, core_yyscan_t yyscanner)
 		ereport(ERROR,
 				errcode(ERRCODE_SYNTAX_ERROR),
 				errmsg("invalid publication object list"),
-				errdetail("One of TABLE or TABLES IN SCHEMA must be specified before a standalone table or schema name."),
+				errdetail("One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be specified before a standalone table or schema name."),
 				parser_errposition(pubobj->location));
 
 	foreach(cell, pubobjspec_list)
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index db944ec2230..de57c369e53 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4390,6 +4390,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4420,9 +4421,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4438,6 +4439,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4458,6 +4460,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4509,8 +4513,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7417eab6aef..1f9bd58a4e2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -661,6 +661,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 386e21e0c59..09a54e310ce 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3273,6 +3273,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..022a92cd0a8 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 2c0b4f28c14..9d2c4e3f481 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..843fe784d64 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ba12678d1cb..1f352599137 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4258,6 +4258,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4265,6 +4281,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index cf48ae6d0c2..fb05755449d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4de96c04f9d..ffae6b5ded2 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -36,20 +36,20 @@ LINE 1: ...pub_xxx WITH (publish_generated_columns = stored, publish_ge...
 CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  publish_generated_columns requires a "none" or "stored" value
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -93,10 +93,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -105,20 +105,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -129,10 +129,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -150,10 +150,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -162,10 +162,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -176,10 +176,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -203,10 +203,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -218,24 +218,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...or_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+                                                                ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -251,10 +323,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -269,10 +341,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -301,10 +373,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -317,10 +389,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -336,10 +408,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -347,10 +419,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -383,10 +455,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -396,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -514,10 +586,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -803,10 +875,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -996,10 +1068,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1207,10 +1279,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1250,10 +1322,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1333,10 +1405,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1346,20 +1418,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1375,19 +1447,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1401,44 +1473,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1453,7 +1525,7 @@ CREATE PUBLICATION testpub_forschema1 FOR CURRENT_SCHEMA;
 ERROR:  invalid publication object list
 LINE 1: CREATE PUBLICATION testpub_forschema1 FOR CURRENT_SCHEMA;
                                                   ^
-DETAIL:  One of TABLE or TABLES IN SCHEMA must be specified before a standalone table or schema name.
+DETAIL:  One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be specified before a standalone table or schema name.
 -- check create publication on CURRENT_SCHEMA along with FOR TABLE
 CREATE PUBLICATION testpub_forschema1 FOR TABLE CURRENT_SCHEMA;
 ERROR:  syntax error at or near "CURRENT_SCHEMA"
@@ -1472,10 +1544,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1483,20 +1555,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1504,10 +1576,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1516,10 +1588,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1528,10 +1600,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1539,10 +1611,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1550,10 +1622,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1561,29 +1633,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1592,10 +1664,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1604,10 +1676,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1686,18 +1758,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1707,20 +1779,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1732,7 +1804,7 @@ CREATE PUBLICATION testpub_error FOR pub_test2.tbl1;
 ERROR:  invalid publication object list
 LINE 1: CREATE PUBLICATION testpub_error FOR pub_test2.tbl1;
                                              ^
-DETAIL:  One of TABLE or TABLES IN SCHEMA must be specified before a standalone table or schema name.
+DETAIL:  One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be specified before a standalone table or schema name.
 DROP VIEW testpub_view;
 DROP PUBLICATION testpub_default;
 DROP PUBLICATION testpub_ins_trunct;
@@ -1842,26 +1914,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1945,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 68001de4000..1f7b7ebe4d5 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 32d6e718adc..933371f2680 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2343,6 +2343,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250622-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch
From d1baeef6167c2183ec3da16280a22b8c055bcbdb Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250622 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..e8bbce141b7
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8e1e8762f62..f6111f94340 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within FinishSyncWorker(). Even if
+		 * we fail to remove the origin tracking here, the apply worker will
+		 * clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -659,37 +607,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1326,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1567,77 +1484,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1723,7 +1569,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1741,7 +1587,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a23262957ac..f730578c219 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1030,7 +1030,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1152,7 +1152,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1208,7 +1208,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1274,7 +1274,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1409,7 +1409,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2251,7 +2251,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3728,7 +3728,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4805,7 +4805,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..5394fbc4afe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 933371f2680..6ba83e95fd7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2905,7 +2905,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250622-0005-New-worker-for-sequence-synchronization-du.patch
From 2283986c59b09f957588c1086b0348a3400d4d3a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 9 Jun 2025 17:45:24 +0530
Subject: [PATCH v20250622 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of the sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences; a usage sketch follows below.
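
To make the intended workflow concrete, here is a minimal usage sketch of the
commands described above. It is only an illustration: the publication,
subscription, and connection names are made up, and the srsubstate codes in
the final query assume the patch reuses the existing pg_subscription_rel
convention ('i' for INIT, 'r' for READY).

    -- Publisher: publish all sequences (FOR ALL SEQUENCES syntax from the
    -- earlier patches in this series).
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- Subscriber: published sequences are added to pg_subscription_rel in
    -- INIT state and a sequencesync worker is launched to synchronize them.
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'dbname=postgres host=publisher' PUBLICATION pub_seq;

    -- Later, re-synchronize all published sequences with the new command.
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;

    -- Observe progress on the subscriber: sequences move from INIT ('i')
    -- to READY ('r').
    SELECT srrelid::regclass, srsubstate
      FROM pg_subscription_rel
     WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');
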
---
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 672 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  70 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 226 ++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1201 insertions(+), 109 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 9853fd50b35..dde8a71b84d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1396,6 +1396,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf357650a24..60035e1979b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 32c77ad372c..2b5dff21e44 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1068,7 +1068,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 					sub_remove_rels[remove_rel_len].relid = relid;
 					sub_remove_rels[remove_rel_len++].state = state;
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 1c3c051403d..aa8f99e29aa 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -226,19 +226,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -248,7 +247,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -308,6 +307,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -393,7 +393,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -479,8 +480,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -603,13 +612,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -676,7 +685,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -806,6 +815,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -854,7 +894,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1169,7 +1209,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1305,7 +1345,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1345,6 +1385,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..6a352c08481
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,672 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and would need a separate
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start a sequencesync worker if one is needed and not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but whose parameters do not match.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Sequences (%s) are missing on the publisher.",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION or use ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Additionally, parameters differ for the remote and local sequences (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " Alter or re-create local sequences to have the same parameters as the remote sequences.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Parameters differ for the remote and local sequences (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "Alter or re-create local sequences to have the same parameters as the remote sequences.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing sequence data from the publisher.
+ *
+ * Fetch the sequence values from the publisher and set the corresponding
+ * subscriber sequences. Caller is responsible for locking the local relations.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
+{
+	int			total_seqs = list_length(remotesequences);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(remotesequences, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Build the list of sequences in the current batch to be fetched from
+		 * the publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		/* Build the query to fetch the state of this batch's sequences. */
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *sequence_info = lfirst(list_nth_cell(remotesequences, search_pos));
+
+				if (!strcmp(sequence_info->nspname, nspname) &&
+					!strcmp(sequence_info->seqname, seqname))
+				{
+					seqinfo = sequence_info;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If fewer sequences were synchronized or reported as mismatched than
+		 * were attempted, some sequences are missing on the publisher.
+		 * Identify which ones.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by the full batch size rather than by the
+		 * number of fetched rows, because sequences missing on the publisher
+		 * return no rows.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, true);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("could not connect to the publisher: %s", err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the sequence synchronization runs as the sequence
+		 * owner, unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		seq_info->remote_seq_fetched = false;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid);
+
+	list_free_deep(sequences_not_synced);
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource problems and are unlikely to be resolved by simply
+ * retrying.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index db15051f47b..5f5770a3908 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates()
 {
+	/*
+	 * has_subtables is declared static so that its value is remembered across
+	 * calls; it is recomputed whenever the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,12 +174,14 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f6111f94340..bea492d4b38 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1243,7 +1243,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1516,7 +1516,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1561,7 +1562,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1569,7 +1570,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1583,23 +1584,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f730578c219..c8e38973fc5 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -489,6 +489,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1029,7 +1034,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1151,7 +1159,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1207,7 +1218,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1273,7 +1287,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1408,7 +1425,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2250,7 +2270,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3727,7 +3750,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4536,7 +4562,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4656,8 +4683,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4736,6 +4763,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4755,14 +4786,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4807,6 +4841,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4833,6 +4870,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4845,9 +4886,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index f04bfedb2fd..ff424159e54 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fa824499fa2..8eb7b0c061c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5695,9 +5695,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 378f2f2c2ba..03d4df572f4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 5394fbc4afe..7590df29910 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..0d8f19993f2
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,226 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync sequences newly
+# created on the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync sequences
+# newly created on the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = off) should
+# not update the sequence values for the newly created sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is off');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when the sequence definitions on the publisher and the subscriber do not match.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error about differing sequence parameters is logged.
+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Parameters differ for the remote and local sequences \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6ba83e95fd7..507093be502 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250622-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patchtext/x-patch; charset=US-ASCII; name=v20250622-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patchDownload
From ea4ef24b42fe6aa102c6c0f6cc6d0e4ed1cba711 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250622 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 322 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 444 insertions(+), 95 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..37bf385bb60 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * When fetching tables, if not_ready is false, return all tables; otherwise
+ * return only tables that have not reached READY state.
+ * When fetching sequences, if not_ready is false, return all sequences;
+ * otherwise return only sequences that have not reached READY state (i.e.
+ * are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 08f780a2e63..9853fd50b35 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4aec73bcc6b..32c77ad372c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating
+	 * a replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +836,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +897,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,10 +922,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
 		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
@@ -880,9 +951,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -904,12 +976,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,8 +993,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -937,11 +1011,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1057,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				break;
+			}
+
+			case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,63 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dbca175b23c..30c6c3b2281 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10892,11 +10892,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e8bbce141b7..db15051f47b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ SyncFetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index de57c369e53..3fdb99bcd90 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5137,12 +5137,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5196,7 +5196,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1f9bd58a4e2..e648adb8a0e 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -818,6 +818,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 9d2c4e3f481..de850945986 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index eacb553075e..fa824499fa2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12258,6 +12258,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 843fe784d64..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1f352599137..d91729b8198 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4326,7 +4326,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
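
As a quick orientation to the 0004 patch above, here is a minimal usage sketch
of the pieces it adds, assuming a FOR ALL SEQUENCES publication named pub1 and
a subscription sub1 (the same names the 0006 documentation examples below use):

  -- publisher: the new pg_publication_sequences view lists the sequences
  -- covered by a publication
  SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';

  -- subscriber: the new command re-synchronizes all published sequences
  ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

  -- subscriber: a plain REFRESH PUBLICATION still only picks up newly added
  -- tables and sequences, subject to copy_data
  ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;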

Attachment: v20250622-0006-Documentation-for-sequence-synchronization.patch
From 8e9af03f04dd36bf8bf97344ee79032585d6d6e4 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250622 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  84 ++++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 8 files changed, 455 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index fa86c569dc4..7d7571a995c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index b265cc89c9d..6400a2c2a83 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5186,9 +5186,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table synchronization worker or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5329,8 +5329,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5353,10 +5353,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index c32e6bc000d..6f67235ec9d 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1711,6 +1715,204 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged listing all differing
+     sequences before the process exits. The apply worker detects the failure
+     and repeatedly respawns the sequence synchronization worker to continue
+     the synchronization process until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2040,16 +2242,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2367,8 +2572,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2381,8 +2586,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..8c794d9b8d0 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +157,31 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -188,6 +209,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           for logical replication does not take this parameter into account when
           copying existing table data.
          </para>
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -224,6 +248,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           See <xref linkend="logical-replication-gencols"/> for more details about
           logical replication of generated columns.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -259,6 +287,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           If this is enabled, <literal>TRUNCATE</literal> operations performed
           directly on partitions are not replicated.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -279,10 +311,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +330,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +482,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 986ae1f543d..e02cc1e7c5a 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and the sequences
+   they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#239shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#238)
Re: Logical Replication of sequences

Thanks for the comment, the attached v20250622 version patch has the
changes for the same.

Thanks for the patches, I am not done with review yet, but please find
the feedback so far:

1)
+ if (!OidIsValid(seq_relid))
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("sequence \"%s.%s\" does not exist",
+    schema_name, sequence_name));

ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE might not be a correct error
code here. Shall we have ERRCODE_UNDEFINED_OBJECT? Thoughts?
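For reference, the suggested change would look roughly like this (just a
sketch, keeping the existing message text):

ereport(ERROR,
        errcode(ERRCODE_UNDEFINED_OBJECT),
        errmsg("sequence \"%s.%s\" does not exist",
               schema_name, sequence_name));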

2)
tab-complete here shows correctly:

postgres=# CREATE PUBLICATION pub6 FOR ALL
SEQUENCES TABLES

But tab-complete for these 2 commands does not show anything:

postgres=# CREATE PUBLICATION pub6 FOR ALL TABLES,
postgres=# CREATE PUBLICATION pub6 FOR ALL SEQUENCES,

We shall specify SEQUENCES/TABLES in above commands. IIUC, we do not
support any other combination like (table <name>, tables in schema
<name>) once we get ALL clause in command. So it is safe to display
tab-complete as either TABLES or SEQUENCES in the above.

3)
postgres=# CREATE publication pub1 for sequences;
ERROR: invalid publication object list
LINE 1: CREATE publication pub1 for sequences;
^
DETAIL: One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be
specified before a standalone table or schema name.

This message is not correct as we can not have ALL SEQUENCES *before*
a standalone table or schema name. The problem is that gram.y is
taking *sequences* as a table or schema name. I noticed that it does the
same with *tables* as well:

postgres=# CREATE publication pub1 for tables;
ERROR: invalid publication object list
LINE 1: CREATE publication pub1 for tables;
^
DETAIL: One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be
specified before a standalone table or schema name.

But since gram.y here cannot identify the missing in-between keyword
*all*, it considers it (tables/sequences) as a literal/name
instead of a keyword. We can revert back to the old message in such a case.
I am unable to think of a good solution here.

4)
I think the error here is wrong as we are not trying to specify
multiple all-tables entries.

postgres=# CREATE PUBLICATION pub6 for all tables, tables in schema public;
ERROR: invalid publication object list
LINE 1: CREATE PUBLICATION pub6 for all tables, tables in schema pub...

^
DETAIL: ALL TABLES can be specified only once.

5)
The log messages still have some scope for improvement:

2025-06-24 08:52:17.988 IST [110359] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2025-06-24 08:52:18.029 IST [110359] LOG: logical replication
sequence synchronization for subscription "sub1" - total
unsynchronized: 3
2025-06-24 08:52:18.090 IST [110359] LOG: logical replication
sequence synchronization for subscription "sub1" - batch #1 = 3
attempted, 0 succeeded, 2 mismatched, 1 missing
2025-06-24 08:52:18.091 IST [110359] ERROR: logical replication
sequence synchronization failed for subscription "sub1"
2025-06-24 08:52:18.091 IST [110359] DETAIL: Sequences
("public.myseq100") are missing on the publisher. Additionally,
parameters differ for the remote and local sequences
("public.myseq101", "public.myseq102").
2025-06-24 08:52:18.091 IST [110359] HINT: Use ALTER SUBSCRIPTION ...
REFRESH PUBLICATION or use ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES. Alter or re-create local sequences to have the same
parameters as the remote sequences

a)
Sequences ("public.myseq100") are missing on the publisher.
Additionally, parameters differ for the remote and local sequences
("public.myseq101", "public.myseq102").

Shall we change this to:
missing sequence(s) on publisher: ("public.myseq100"); mismatched
sequence(s) on subscriber: ("public.myseq101", "public.myseq102")

It will then be similar to the previous log pattern (3 attempted, 0
succeeded, etc.) instead of being more verbose. Thoughts?

b)
Hints are a little odd. The first line of the hint just says to use
'ALTER SUBSCRIPTION' without giving any purpose, while the second line
gives the purpose of altering the sequences.

Shall we have?
For missing sequences, use ALTER SUBSCRIPTION with either REFRESH
PUBLICATION or REFRESH PUBLICATION SEQUENCES
For mismatched sequences, alter or re-create local sequences to have
parameters matching the publisher's.

Thoughts?
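Putting (a) and (b) together, the reworked report could look roughly like
this (only a sketch; the string variables and exact wording are
placeholders, not actual patch code):

ereport(ERROR,
        errmsg("logical replication sequence synchronization failed for subscription \"%s\"",
               MySubscription->name),
        /* missing_seqs / mismatched_seqs are hypothetical, pre-built name lists */
        errdetail("Missing sequence(s) on publisher: (%s); mismatched sequence(s) on subscriber: (%s).",
                  missing_seqs, mismatched_seqs),
        errhint("For missing sequences, use ALTER SUBSCRIPTION ... REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES. "
                "For mismatched sequences, alter or re-create the local sequences with parameters matching the publisher's."));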

6)
postgres=# create publication pub1 for table tab1, all sequences;
ERROR: syntax error at or near "all"
LINE 1: create publication pub1 for table tab1, all sequences;

We can mention in the commit message that this combination is also not
supported, or which combinations are supported. Currently it is not
clear from the commit message.

thanks
Shveta

#240Nisha Moond
nisha.moond412@gmail.com
In reply to: vignesh C (#238)
Re: Logical Replication of sequences

On Sun, Jun 22, 2025 at 8:05 AM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comment, the attached v20250622 version patch has the
changes for the same.

Thanks for the patches, please find my review comments for patches 001 and 002:

1) patch-001: pg_sequence_state()

+ /* open and lock sequence */
+ init_sequence(seq_relid, &elm, &seqrel);

/ open/ Open
~~~

patch-0002:

2)
- /* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
- ereport(ERROR,
- (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("must be superuser to create FOR ALL TABLES publication")));
+ if (!superuser())

I think we can retain the original comment here with the required modification:
/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
~~~

3) ALL TABLES vs ALL SEQUENCES

For the command:
CREATE PUBLICATION FOR ALL TABLES, [...]
- only "SEQUENCES" is allowed after ',', and TABLE or TABLES IN
SCHEMA are not allowed. This aligns with the fact that "all tables"
is inclusive of "FOR TABLE" and "FOR TABLES IN SCHEMA".
Therefore, adding and dropping tables from an "ALL TABLES"
publication is also not allowed. e.g.,

```
postgres=# alter publication test add table t1;
ERROR: publication "test" is defined as FOR ALL TABLES
DETAIL: Tables cannot be added to or dropped from FOR ALL TABLES publications.

postgres=# alter publication test add tables in schema public;
ERROR: publication "test" is defined as FOR ALL TABLES
DETAIL: Schemas cannot be added to or dropped from FOR ALL TABLES publications.
```

However, for ALL SEQUENCES, the behavior seems inconsistent. CREATE
PUBLICATION doesn’t allow adding TABLE or TABLES IN SCHEMA along with
ALL SEQUENCES, but ALTER PUBLICATION does.
e.g., for an all-sequences publication 'pubs', the following succeeds:
postgres=# alter publication pubs add table t1;
ALTER PUBLICATION

Is this expected?
If adding tables is allowed using ALTER PUBLICATION, perhaps it should
also be permitted during CREATE PUBLICATION or disallowed in both
cases. Thoughts?
~~~

4) Consider a publication 'pubs' on all sequences, where 'n1' is a sequence; now -

postgres=# alter publication pubs drop table n1;
ERROR: relation "n1" is not part of the publication

This error message can be misleading, as the relation n1 is part of
the publication - it's just a sequence, not a table.
It might be more accurate to add a DETAIL line similar to the ADD case:

postgres=# alter publication pubs add table n1;
ERROR: cannot add relation "n1" to publication
DETAIL: This operation is not supported for sequences.
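For the DROP path that could be something like the following (a sketch
only; the error code and exact wording are placeholders mirroring the ADD
case, not actual patch code):

ereport(ERROR,
        errcode(ERRCODE_WRONG_OBJECT_TYPE),
        errmsg("cannot drop relation \"%s\" from publication",
               RelationGetRelationName(rel)),
        errdetail("This operation is not supported for sequences."));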
~~~

--
Thanks.
Nisha

#241Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#238)
Re: Logical Replication of sequences

On Sun, 22 Jun 2025 at 08:05, vignesh C <vignesh21@gmail.com> wrote:

On Thu, 19 Jun 2025 at 11:26, Nisha Moond <nisha.moond412@gmail.com> wrote:

Hi,

Here are my review comments for v20250610 patches:

Patch-0005:sequencesync.c

1) report_error_sequences()

In case there are both missing and mismatched sequences, the ERROR
message logged is -

```
2025-05-28 14:22:19.898 IST [392259] ERROR: logical replication
sequence synchronization failed for subscription "subs": sequences
("public"."n84") are missing on the publisher. Additionally,
parameters differ for the remote and local sequences ("public.n1")
```

I feel this error message is quite long. Would it be possible to split
it into ERROR and DETAIL? Also, if feasible, we could consider
including a HINT, as was done in previous versions.

I explored a few possible ways to log this error with a hint. Attached
top-up patch has the suggestion implemented. Please see if it seems
okay to consider.

This looks good, merged it.

~~~

2) copy_sequences():
+ /* Retrieve the sequence object fetched from the publisher */
+ for (int i = 0; i < batch_size; i++)
+ {
+ LogicalRepSequenceInfo *sequence_info =
lfirst(list_nth_cell(remotesequences, current_index + i));
+
+ if (!strcmp(sequence_info->nspname, nspname) &&
+ !strcmp(sequence_info->seqname, seqname))
+ seqinfo = sequence_info;
+ }

The current logic performs a search through the local sequence list
for each sequence fetched from the publisher, repeating the traverse
of 100(batch size) length of the list per sequence, which may impact
performance.

To improve efficiency, we can optimize it by sorting the local list
and traverses it only once for matching. Kindly review the
implementation in the attached top-up patch and consider merging it if
it looks good to you.

Looks good, merged it.

~~~

3) copy_sequences():
+ if (message_level_is_interesting(DEBUG1))
+ ereport(DEBUG1,
+ errmsg_internal("logical replication synchronization for
subscription \"%s\", sequence \"%s\" has finished",
+ MySubscription->name,
+ seqinfo->seqname));
+
+ batch_succeeded_count++;
+ }

The current debug log might be a bit confusing when sequences with the
same name exist in different schemas. To improve clarity, we could
include the schema name in the message, e.g.,
" ... sequence "schema"."sequence" has finished".

Modified

~~~~

Few minor comments in doc - Patch-0006 : logical-replication.sgml

4)
+  <para>
+   To replicate sequences from a publisher to a subscriber, first publish them
+   using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>

I think it would be better to use "To synchronize" instead of "To
replicate" here, to maintain consistency and avoid confusion between
replication and synchronization.

Modified

5)
+   <para>
+    Update the sequences at the publisher side few times.
+<programlisting>

/side few /side a few /

6) Can avoid using multiple "or" in the sentences below:

6a)
-   a change set or replication set.  Each publication exists in only
one database.
+   generated from a table or a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set

/ table or a group of tables/ table, a group of tables/

Modified

6b)
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, or <literal>FOR ALL
TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of

/ IN SCHEMA</literal>, or <literal>FOR ALL TABLES/ IN
SCHEMA</literal>, <literal>FOR ALL TABLES

Modified

Thanks for the comment, the attached v20250622 version patch has the
changes for the same.

Hi Vignesh,

I have reviewed patches 0001 and 0002. I do not have any comments
for the 0001 patch. Here are my comments for the 0002 patch.

1. Initially, I have created a publication on sequence s1.
postgres=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
postgres=# ALTER PUBLICATION pub1 SET TABLE t1;
ALTER PUBLICATION
postgres=# \d s1
Sequence "public.s1"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Publications:
"pub1"
postgres=# select * from pg_publication_rel;
oid | prpubid | prrelid | prqual | prattrs
-------+---------+---------+--------+---------
16415 | 16414 | 16388 | |
(1 row)

Here, we can set the publication to TABLE or TABLES IN SCHEMA. Should
this be allowed?
If a publication is created with FOR ALL TABLES, such an operation is not allowed.

If we decide to allow it, it is currently not handled correctly as we
can still see "pub1" in Publications for sequence s1.

2. Similar to the comment given by Shveta in [1], point 3.
The same behaviour is present for ALTER PUBLICATION:

postgres=# ALTER PUBLICATION pub2 ADD SEQUENCES;
ERROR: invalid publication object list
LINE 1: ALTER PUBLICATION pub2 ADD SEQUENCES;
^
DETAIL: One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be
specified before a standalone table or schema name.

postgres=# ALTER PUBLICATION pub2 ADD TABLES;
ERROR: invalid publication object list
LINE 1: ALTER PUBLICATION pub2 ADD TABLES;
^
DETAIL: One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be
specified before a standalone table or schema name.

[1]: /messages/by-id/CAJpy0uAY9sCRCkkVn4qbQeU8NMdLEA_wwBCyV0tvYft_sFST7g@mail.gmail.com

Thanks and Regards,
Shlok Kyal

#242Nisha Moond
nisha.moond412@gmail.com
In reply to: Nisha Moond (#240)
Re: Logical Replication of sequences

On Tue, Jun 24, 2025 at 3:07 PM Nisha Moond <nisha.moond412@gmail.com> wrote:

On Sun, Jun 22, 2025 at 8:05 AM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comment, the attached v20250622 version patch has the
changes for the same.

Thanks for the patches, please find my review comments for patches 001 and 002:

Please find my further comments on patches 004 and 005:
(I've no comments for 006)

patch-004:

5) The new fetch_sequence_list() function should be guarded with version checks.
Without this, CREATE SUBSCRIPTION will always fail when a newer
subscriber (>=PG19) attempts to create a subscription to an older
publisher (<PG19).
e.g., pub1 is a publication on a node without the patch, with only tables in it.

postgres=# create subscription sub_oldpub connection 'dbname=postgres
host=localhost port=8841' publication pub1;
ERROR: could not receive list of sequences from the publisher: ERROR:
relation "pg_catalog.pg_publication_sequences" does not exist
LINE 2: FROM pg_catalog.pg_publication_sequences ...
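A guard similar to the version checks tablesync.c already uses for newer
catalog columns could avoid this; a minimal sketch (the function arguments
here are placeholders, not the patch's actual signature):

/* Skip sequence synchronization if the publisher predates it. */
if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 190000)
    remotesequences = fetch_sequence_list(LogRepWorkerWalRcvConn, publications);
else
    remotesequences = NIL;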
~~~

6)
+ * not_ready:
+ * If getting tables, if not_ready is false get all tables, otherwise
+ * only get tables that have not reached READY state.
+ * If getting sequences, if not_ready is false get all sequences,
+ * otherwise only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).

I feel the above comment could be reworded slightly for better clarity.
Suggestion:

* If getting tables and not_ready is false, get all tables, otherwise,
* only get tables that have not reached READY state.
* If getting sequences and not_ready is false, get all sequences,
* otherwise, only get sequences that have not reached READY state (i.e. are
~~~

patch-005:

7)
+ /*
+ * Establish the connection to the publisher for sequence synchronization.
+ */
+ LogRepWorkerWalRcvConn =
+ walrcv_connect(MySubscription->conninfo, true, true,
+    must_use_password,
+    app_name.data, &err);
+ if (LogRepWorkerWalRcvConn == NULL)
+ ereport(ERROR,
+ errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not connect to the publisher: %s", err));

The error message should mention the specific process or worker that
failed to connect, similar to how it's done for other workers like
slotsync or tablesync.

Suggestion:
errmsg("sequencesync worker for subscription \"%s\" could not connect
to the publisher: %s", MySubscription->name, err));
~~~

8)
+ CommitTransactionCommand();
+
+ copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid);
+
+ list_free_deep(sequences_not_synced);

Should we also free the 'remotesequences' list here?
~~~

--
Thanks,
Nisha

#243shveta malik
shveta.malik@gmail.com
In reply to: Shlok Kyal (#241)
Re: Logical Replication of sequences

On Tue, Jun 24, 2025 at 6:44 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

1. Initially, I have created a publication on sequence s1.
postgres=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
postgres=# ALTER PUBLICATION pub1 SET TABLE t1;
ALTER PUBLICATION
postgres=# \d s1
Sequence "public.s1"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Publications:
"pub1"
postgres=# select * from pg_publication_rel;
oid | prpubid | prrelid | prqual | prattrs
-------+---------+---------+--------+---------
16415 | 16414 | 16388 | |
(1 row)

Here, we can set the publication to TABLE or TABLES IN SCHEMA. Should
this be allowed?
If a publication is created with FOR ALL TABLES, such an operation is not allowed.

Good catch. IMO, this should not be allowed, as currently we strictly
support either ALL SEQUENCES alone or ALL SEQUENCES together with ALL TABLES.

thanks
Shveta

#244Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#238)
Re: Logical Replication of sequences

On Sun, 22 Jun 2025 at 08:05, vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comment, the attached v20250622 version patch has the
changes for the same.

Hi Vignesh,

I have reviewed the 0004 patch. Here are my comments:

1. I think we need to update the below comment for function
AlterSubscription_refresh. I feel replacing 'tables' with 'relations'
would be sufficient.
/*
* Build qsorted array of local table oids for faster lookup. This can
* potentially contain all tables in the database so speed of lookup
* is important.
*/

2. Similar to the above comment.
/*
* Walk over the remote tables and try to match them to locally known
* tables. If the table is not known locally create a new state for
* it.
*
* Also builds array of local oids of remote tables for the next step.
*/

3. Similar to the above comments.
/*
* Next remove state for tables we should not care about anymore using
* the data we collected above
*/

4. Since we are not adding sequences to the list 'sub_remove_rels',
should we only palloc for the number of tables? Is it worth
the effort?
/*
* Rels that we want to remove from subscription and drop any slots
* and origins corresponding to them.
*/
sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
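i.e., something like this (a sketch; 'subtbl_count' is a hypothetical
count of table entries only, not an existing variable):

/* Only tables are ever added to sub_remove_rels, so sizing by the
 * number of tables alone would be enough. */
sub_remove_rels = palloc(subtbl_count * sizeof(SubRemoveRels));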

Thanks and Regards,
Shlok Kyal

#245Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#238)
Re: Logical Replication of sequences

On Sun, 22 Jun 2025 at 08:05, vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comment, the attached v20250622 version patch has the
changes for the same.

Hi Vignesh,

I tested with all patches applied. I have a comment:

Let's consider the following case:
On the publisher, create a publication pub1 on all sequences; the publication
has sequence s1. The current value of s1 is 2.
On the subscriber, we have a subscription on pub1 and sequence s1 has
value 5. Now we run:
"ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES"

Now on the subscriber, currval still shows '5':
postgres=# select currval('s1');
currval
---------
5
(1 row)

But when we do nextval on s1 on the subscriber we get '3', which is
correct assuming the sequence is synced:
postgres=# select nextval('s1');
nextval
---------
3
(1 row)
postgres=# select currval('s1');
currval
---------
3
(1 row)

Is this behaviour expected? I feel the initial " select
currval('s1');" should have displayed '2'. Thoughts?

Thanks and Regards,
Shlok Kyal

#246shveta malik
shveta.malik@gmail.com
In reply to: Shlok Kyal (#245)
Re: Logical Replication of sequences

On Wed, Jun 25, 2025 at 7:42 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

Hi Vignesh,

I tested with all patches applied. I have a comment:

Let consider following case:
On publisher create a publication pub1 on all sequence. publication
has sequence s1. The curr value of s1 is 2
and On subscriber we have subscription on pub1 and sequence s1 has
value 5. Now we run:
"ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES"

Now on subscriber currval still show '5':
postgres=# select currval('s1');
currval
---------
5
(1 row)

But when we do nextval on s1 on subscriber we get '3'. Which is
correct assuming sequence is synced:
postgres=# select nextval('s1');
nextval
---------
3
(1 row)
postgres=# select currval('s1');
currval
---------
3
(1 row)

Is this behaviour expected? I feel the initial " select
currval('s1');" should have displayed '2'. Thoughts?

As per docs at [1], currval returns the value most recently obtained
by nextval for this sequence in the current session. So behaviour is
in alignment with docs. I tested this across multiple sessions,
regardless of whether sequence synchronization occurs or not. The
behavior remains consistent across sessions. As an example, if in
session2 the sequence has advanced to a value of 10 by calling
nextval, the currval in session1 still shows the previous value of 3,
which was obtained by nextval in session1. So, it seems expected to
me. But let's wait for Vignesh's comments on this.

[1]: https://www.postgresql.org/docs/current/functions-sequence.html

thanks
Shveta

#247Nisha Moond
nisha.moond412@gmail.com
In reply to: shveta malik (#246)
Re: Logical Replication of sequences

On Fri, Jun 27, 2025 at 8:50 AM shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Jun 25, 2025 at 7:42 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

Hi Vignesh,

I tested with all patches applied. I have a comment:

Let consider following case:
On publisher create a publication pub1 on all sequence. publication
has sequence s1. The curr value of s1 is 2
and On subscriber we have subscription on pub1 and sequence s1 has
value 5. Now we run:
"ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES"

Now on subscriber currval still show '5':
postgres=# select currval('s1');
currval
---------
5
(1 row)

But when we do nextval on s1 on subscriber we get '3'. Which is
correct assuming sequence is synced:
postgres=# select nextval('s1');
nextval
---------
3
(1 row)
postgres=# select currval('s1');
currval
---------
3
(1 row)

Is this behaviour expected? I feel the initial " select
currval('s1');" should have displayed '2'. Thoughts?

As per docs at [1], currval returns the value most recently obtained
by nextval for this sequence in the current session. So behaviour is
in alignment with docs. I tested this across multiple sessions,
regardless of whether sequence synchronization occurs or not. The
behavior remains consistent across sessions. As an example, if in
session2 the sequence has advanced to a value of 10 by calling
nextval, the currval in session1 still shows the previous value of 3,
which was obtained by nextval in session1. So, it seems expected to
me. But let's wait for Vignesh's comments on this.

[1]: https://www.postgresql.org/docs/current/functions-sequence.html

I also agree. It is expected behavior, as currval returns the last
value obtained by nextval in the same session. I've also observed this
across multiple sessions.

--
Thanks,
Nisha

#248Nisha Moond
nisha.moond412@gmail.com
In reply to: shveta malik (#239)
6 attachment(s)
Re: Logical Replication of sequences

On Tue, Jun 24, 2025 at 10:37 AM shveta malik <shveta.malik@gmail.com> wrote:

Thanks for the comment, the attached v20250622 version patch has the
changes for the same.

Thanks for the patches, I am not done with review yet, but please find
the feedback so far:

Thanks for the review.

1)
+ if (!OidIsValid(seq_relid))
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("sequence \"%s.%s\" does not exist",
+    schema_name, sequence_name));

ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE might not be a correct error
code here. Shall we have ERRCODE_UNDEFINED_OBJECT? Thoughts?

+1
Updated to ERRCODE_UNDEFINED_OBJECT code in v20250630

2)
tab-complete here shows correctly:

postgres=# CREATE PUBLICATION pub6 FOR ALL
SEQUENCES TABLES

But tab-complete for these 2 commands does not show anything:

postgres=# CREATE PUBLICATION pub6 FOR ALL TABLES,
postgres=# CREATE PUBLICATION pub6 FOR ALL SEQUENCES,

We shall specify SEQUENCES/TABLES in above commands. IIUC, we do not
support any other combination like (table <name>, tables in schema
<name>) once we get ALL clause in command. So it is safe to display
tab-complete as either TABLES or SEQUENCES in the above.

Tab-completion is not supported after a comma (,) in other cases either.
For example, the following commands are valid, but tab-completion does
not work after the comma:

CREATE PUBLICATION pub7 FOR TABLE t1, TABLES IN SCHEMA public;
CREATE PUBLICATION pub7 FOR TABLES IN SCHEMA public, TABLES IN SCHEMA schema2;

I feel we can keep the behavior consistent in this case too. Thoughts?

3)
postgres=# CREATE publication pub1 for sequences;
ERROR: invalid publication object list
LINE 1: CREATE publication pub1 for sequences;
^
DETAIL: One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be
specified before a standalone table or schema name.

This message is not correct as we can not have ALL SEQUENCES *before*
a standalone table or schema name. The problem is that gram.y is
taking *sequences* as a table or schema name. I noticed that it does the
same with *tables* as well:

postgres=# CREATE publication pub1 for tables;
ERROR: invalid publication object list
LINE 1: CREATE publication pub1 for tables;
^
DETAIL: One of TABLE, TABLES IN SCHEMA or ALL SEQUENCES must be
specified before a standalone table or schema name.

But since gram.y here cannot identify the missing in-between keyword
*all*, it considers it (tables/sequences) as a literal/name
instead of a keyword. We can revert back to the old message in such a case.
I am unable to think of a good solution here.

Done, reverted to the original message.

4)
I think the error here is wrong as we are not trying to specify
multiple all-tables entries.

postgres=# CREATE PUBLICATION pub6 for all tables, tables in schema public;
ERROR: invalid publication object list
LINE 1: CREATE PUBLICATION pub6 for all tables, tables in schema pub...

^
DETAIL: ALL TABLES can be specified only once.

The parser cannot distinguish between "TABLES" and "TABLES IN SCHEMA"
while building all_object_list for "FOR ALL ...".
To address this, the duplicate check has been moved to
CreatePublication, and a syntax error is now raised for cases
mentioned above.

5)
The log messages still have some scope for improvement:

2025-06-24 08:52:17.988 IST [110359] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2025-06-24 08:52:18.029 IST [110359] LOG: logical replication
sequence synchronization for subscription "sub1" - total
unsynchronized: 3
2025-06-24 08:52:18.090 IST [110359] LOG: logical replication
sequence synchronization for subscription "sub1" - batch #1 = 3
attempted, 0 succeeded, 2 mismatched, 1 missing
2025-06-24 08:52:18.091 IST [110359] ERROR: logical replication
sequence synchronization failed for subscription "sub1"
2025-06-24 08:52:18.091 IST [110359] DETAIL: Sequences
("public.myseq100") are missing on the publisher. Additionally,
parameters differ for the remote and local sequences
("public.myseq101", "public.myseq102").
2025-06-24 08:52:18.091 IST [110359] HINT: Use ALTER SUBSCRIPTION ...
REFRESH PUBLICATION or use ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES. Alter or re-create local sequences to have the same
parameters as the remote sequences

a)
Sequences ("public.myseq100") are missing on the publisher.
Additionally, parameters differ for the remote and local sequences
("public.myseq101", "public.myseq102").

Shall we change this to:
missing sequence(s) on publisher: ("public.myseq100"); mismatched
sequence(s) on subscriber: ("public.myseq101", "public.myseq102")

It would then follow the previous log pattern (3 attempted, 0
succeeded, etc.) instead of being more verbose. Thoughts?

b)
Hints are a little odd. First line of hint is just saying to use
'ALTER SUBSCRIPTION' without giving any purpose. While the second line
of hint is giving the purpose of alter-sequences.

Shall we have:
For missing sequences, use ALTER SUBSCRIPTION with either REFRESH
PUBLICATION or REFRESH PUBLICATION SEQUENCES.
For mismatched sequences, alter or re-create the local sequences to
have the same parameters as the publisher's.

Thoughts?

Updated the messages as suggested.

6)
postgres=# create publication pub1 for table tab1, all sequences;
ERROR: syntax error at or near "all"
LINE 1: create publication pub1 for table tab1, all sequences;

We can mention in the commit message that this combination is also not
supported, or list which combinations are supported. Currently this is
not clear from the commit message.

Done.
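
To make the supported combinations explicit (they are now listed in the
0002 commit message), a quick sketch with made-up publication names:

postgres=# CREATE PUBLICATION pub_a FOR ALL SEQUENCES;          -- supported
postgres=# CREATE PUBLICATION pub_b FOR ALL SEQUENCES, TABLES;  -- supported
postgres=# CREATE PUBLICATION pub_c FOR TABLE tab1, ALL SEQUENCES;           -- rejected (syntax error)
postgres=# CREATE PUBLICATION pub_d FOR ALL TABLES, TABLES IN SCHEMA public; -- rejected (syntax error)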
~~~~

Please find the attached v20250630 patch set addressing the above
comments and other comments in [1], [2], [3] and [4].

[1]: /messages/by-id/CABdArM7h1qQLUb_S7i6MrLPEtHXnX+Y2fPQaSnqhCdHktcQk5Q@mail.gmail.com
[2]: /messages/by-id/CANhcyEVbdambw=aVVuW0RrhQ7Lkqad=CdrvVA8FP6Xb+kP_Qzg@mail.gmail.com
[3]: /messages/by-id/CABdArM5mwL8WtGWdDdYT98ddYaB=3N6cfPBncvnh682X1GfbVQ@mail.gmail.com
[4]: /messages/by-id/CANhcyEWKhHWFzpdAF6czbwq76NRDNCecDqQNtN6Bomn26mqHFw@mail.gmail.com

--
Thanks,
Nisha

Attachments:

v20250630-0001-Introduce-pg_sequence_state-function-for-e.patch
From 43d330cb5a0164076c8ef9f5d172d7f644527598 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:12:45 +0530
Subject: [PATCH v20250630 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 298791858be..b33ae0447b0 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..d051adf4931 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* Open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The on-disk last_value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fb4f7f50350..b0e60dfa3ce 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.34.1
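
For quick reference, the new function can be exercised as in the
regression test added above; for a freshly created sequence it returns:

postgres=# CREATE SEQUENCE sequence_test;
CREATE SEQUENCE
postgres=# SELECT last_value, log_cnt, is_called FROM pg_sequence_state('public', 'sequence_test');
 last_value | log_cnt | is_called 
------------+---------+-----------
          1 |       0 | f
(1 row)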

v20250630-0002-Introduce-ALL-SEQUENCES-support-for.patch
From 1bc85ca91bac5ce94a2ec4e26a84330d0cc375cd Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:14:18 +0530
Subject: [PATCH v20250630 2/6] Introduce "ALL SEQUENCES" support for 
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now displays which publications include
the specified sequence, and \dRp shows whether a given publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 134 ++++--
 src/backend/parser/gram.y                 |  38 +-
 src/bin/pg_dump/pg_dump.c                 |  14 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  19 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 558 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 702 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..26ab7ea9bed 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -72,6 +72,8 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 								  AlterPublicationStmt *stmt);
 static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
 static char defGetGeneratedColsOption(DefElem *def);
+static void process_all_objtype_list(List *all_objects_list, bool *all_tables,
+									 bool *all_sequences);
 
 
 static void
@@ -820,6 +822,41 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 	}
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pubobjtype has been specified more than once.
+ */
+static void
+process_all_objtype_list(List *all_objects_list, bool *all_tables,
+						 bool *all_sequences)
+{
+	Assert(all_objects_list);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Create new publication.
  */
@@ -841,6 +878,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	AclResult	aclresult;
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
+	bool		all_tables = false;
+	bool		all_sequences = false;
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -848,11 +887,22 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (stmt->all_pub)
+		process_all_objtype_list(stmt->pubobjects,
+								 &all_tables,
+								 &all_sequences);
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +934,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = all_tables;
+	values[Anum_pg_publication_puballsequences - 1] = all_sequences;
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -913,12 +963,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (all_tables)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1490,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1504,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1961,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2085,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 50f53159d58..b6dbb4fe7fc 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -259,6 +259,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +585,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10612,7 +10614,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10632,13 +10639,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					n->pubobjects = (List *) $6;
+					n->all_pub = true;
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10750,6 +10758,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1937997ea67..eda52fc813a 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4392,6 +4392,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4422,9 +4423,9 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, p.puballsequences ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, false AS puballsequences ", PUBLISH_GENCOLS_NONE);
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4440,6 +4441,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4460,6 +4462,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4511,8 +4515,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2485d8f360e..b3bfbdc82bc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3302,6 +3302,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..022a92cd0a8 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 180000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
 
-		free(footers[0]);
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
+
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 180000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 180000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 908eef97c6e..31030fc212c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3530,12 +3530,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 48c7d1a8615..843fe784d64 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ba12678d1cb..57d8c129796 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4258,13 +4258,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		all_pub;		/* Special publication for all tables,
+								 * sequences */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..749c316a107 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,92 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +320,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +338,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +370,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +386,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +405,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +416,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +452,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +583,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +878,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1071,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1282,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1325,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1340,10 +1408,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1421,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1450,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1476,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1547,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1558,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1579,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1591,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1603,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1614,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1636,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1667,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1679,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1761,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1782,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1917,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..1cf013e72f6 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 32d6e718adc..933371f2680 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2343,6 +2343,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.34.1

Attachment: v20250630-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 5e6a0bf4a267f025b0f8f7c97566452ea476c8af Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250630 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..e8bbce141b7
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index c90f23ee5b0..dfae43dd806 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index fd11805a44c..5f10f28cacb 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..5394fbc4afe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 933371f2680..6ba83e95fd7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2905,7 +2905,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.34.1

Attachment: v20250630-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From b33b3562ce0945f2c794cf745f95ac854543faf0 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250630 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
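
A minimal usage sketch of the new command (the publication/subscription
names and the connection string below are illustrative only, not part of
this patch):

  -- publisher: publish all sequences
  CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

  -- subscriber: subscribe as usual
  CREATE SUBSCRIPTION seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION seq_pub;

  -- subscriber: later, re-synchronize all published sequences
  ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;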
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 348 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 460 insertions(+), 105 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 08f780a2e63..9853fd50b35 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4ff246cd943..09ae7b80722 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,12 +836,50 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
@@ -839,6 +897,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,9 +951,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
 		 * Rels that we want to remove from subscription and drop any slots
@@ -891,11 +963,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +977,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,14 +994,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
@@ -937,11 +1012,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1058,51 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					sub_remove_rels[remove_rel_len].relid = relid;
+					sub_remove_rels[remove_rel_len++].state = state;
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1393,8 +1498,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1513,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1554,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1573,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1597,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1610,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1894,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2208,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2248,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2433,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b6dbb4fe7fc..04d6408580a 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10889,11 +10889,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e8bbce141b7..db15051f47b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ SyncFetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index eda52fc813a..5c23d3db56e 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5139,12 +5139,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5198,7 +5198,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 31030fc212c..29a18913512 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2288,7 +2288,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b0e60dfa3ce..52a8a2a9672 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12261,6 +12261,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 843fe784d64..283c0b11195 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 57d8c129796..3c64e2d17a7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4325,7 +4325,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.34.1

Attachment: v20250630-0005-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 35e8a178c9fcbcd4c7200a570c94be8ec7d8750e Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:06:04 +0530
Subject: [PATCH v20250630 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of sequences in INIT state.
    b) Reports an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing the synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG18 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG18 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
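
For illustration, here is a minimal usage sketch of the above (the subscription,
publication, and connection values are placeholders; 'i' and 'r' are the
pg_subscription_rel state codes for INIT and READY):

    -- Initial sync: published sequences are added in INIT state and a
    -- sequencesync worker copies their values from the publisher.
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub;

    -- Pick up newly published tables/sequences; only the newly added
    -- sequences are put into INIT state and synchronized.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- Re-synchronize all published sequences (the new command): every
    -- sequence in pg_subscription_rel is reset to INIT and synced again.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

    -- Observe progress on the subscriber: sequences move from 'i' to 'r'.
    SELECT srrelid::regclass AS seq, srsubstate
    FROM pg_subscription_rel
    WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');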
---
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 675 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  70 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 228 ++++++
 src/tools/pgindent/typedefs.list              |   1 +
 23 files changed, 1206 insertions(+), 109 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 9853fd50b35..dde8a71b84d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1396,6 +1396,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index d051adf4931..4d03704f39b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker, to set the
+ * log_cnt of sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 09ae7b80722..09b2d9ac104 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1069,7 +1069,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 					sub_remove_rels[remove_rel_len].relid = relid;
 					sub_remove_rels[remove_rel_len++].state = state;
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1871,7 +1871,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..161c4901245 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +907,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1222,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1369,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1409,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..b66f03412e2
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,675 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is already running, there is no need to start a
+ * new one; the existing sequencesync worker will synchronize all the
+ * sequences. If any sequences still need syncing after that worker has
+ * exited, a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed and one is not already running. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check whether a sequencesync worker is already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *remotesequences, Oid subid)
+{
+	int			total_seqs = list_length(remotesequences);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(remotesequences, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *sequence_info = lfirst(list_nth_cell(remotesequences, search_pos));
+
+				if (!strcmp(sequence_info->nspname, nspname) &&
+					!strcmp(sequence_info->seqname, seqname))
+				{
+					seqinfo = sequence_info;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(remotesequences, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the batch size rather than by the number
+		 * of fetched rows, because some sequences may be missing on the
+		 * publisher and fewer rows than batch_size may have been returned.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *sequences;
+	List	   *sequences_not_synced = NIL;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *remotesequences = NIL;
+	char	   *nspname;
+	char	   *seqname;
+	LogicalRepSequenceInfo *seq_info;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	sequences = GetSubscriptionRelations(subid, false, true, true);
+
+	/* Allocate the tracking info in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+	foreach_ptr(SubscriptionRelState, seq_state, sequences)
+	{
+		SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+		memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+		sequences_not_synced = lappend(sequences_not_synced, rstate);
+	}
+	MemoryContextSwitchTo(oldctx);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(sequences_not_synced);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, seqinfo, sequences_not_synced)
+	{
+		Relation	sequence_rel;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(seqinfo->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the synchronization runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = seqinfo->relid;
+		seq_info->remote_seq_fetched = false;
+		remotesequences = lappend(remotesequences, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	copy_sequences(LogRepWorkerWalRcvConn, remotesequences, subid);
+
+	list_free_deep(sequences_not_synced);
+	list_free_deep(remotesequences);
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors here, since those are most likely
+ * caused by system resource problems and are therefore not expected to be
+ * repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index db15051f47b..5f5770a3908 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates(void)
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,12 +174,14 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index dfae43dd806..d51b6326359 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1248,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1521,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1567,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1589,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5f10f28cacb..f1579fa9fa8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4800,6 +4834,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4826,6 +4863,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4879,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 511dc32d519..4be4c5d1b89 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 52a8a2a9672..b1fe4666a0d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5696,9 +5696,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 378f2f2c2ba..03d4df572f4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 5394fbc4afe..7590df29910 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..8b0e3bc76d3
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,228 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error reporting the mismatched sequence definitions is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequences\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6ba83e95fd7..507093be502 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.34.1
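
With the above patch applied, the new per-subscription counter should show up
next to the existing error counters; a quick sanity check (assuming the
pg_stat_subscription_stats view is extended to match the function change above)
would be:

    SELECT subname, apply_error_count, sequence_sync_error_count,
           sync_error_count, stats_reset
    FROM pg_stat_subscription_stats;
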

v20250630-0006-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From 3ec7551e3bb374f5dcfadf34d21ca96721a5a2c3 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250630 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  84 ++++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 8 files changed, 455 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index fa86c569dc4..7d7571a995c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 59a0874528a..fbba9c65fa6 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5186,9 +5186,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5329,8 +5329,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5353,10 +5353,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index c32e6bc000d..6f67235ec9d 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables and sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1711,6 +1715,204 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   On the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions on the publisher
+     and the subscriber are compared. If any definitions differ, a WARNING
+     listing the mismatched sequences is logged and the sequence synchronization
+     worker exits. The apply worker detects the failure and repeatedly respawns
+     the sequence synchronization worker until all differences are resolved.
+     See also <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
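
As a concrete sketch of the resolution workflow described in this subsection
(the sequence and subscription names here are only examples, not taken from the
patch), the subscriber-side steps would look something like:

    -- make the subscriber's definition match the publisher's
    ALTER SEQUENCE public.s1 START WITH 1 INCREMENT BY 2;
    -- then retry the synchronization
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
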
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values can frequently become out of sync with the
+    publisher, because incremental sequence changes are not replicated.
+   </para>
+   <para>
+    To check for this, compare the sequence values between the publisher and
+    the subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
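
One simple way to make that comparison is to run the same query on both nodes
and diff the output, for example using the existing pg_sequences view:

    SELECT schemaname, sequencename, last_value
    FROM pg_sequences
    ORDER BY 1, 2;
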
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2040,16 +2242,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2367,8 +2572,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2381,8 +2586,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing table data and synchronize
+          sequences in the publications that are being subscribed to when the
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..8c794d9b8d0 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +157,31 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -188,6 +209,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           for logical replication does not take this parameter into account when
           copying existing table data.
          </para>
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -224,6 +248,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           See <xref linkend="logical-replication-gencols"/> for more details about
           logical replication of generated columns.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -259,6 +287,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           If this is enabled, <literal>TRUNCATE</literal> operations performed
           directly on partitions are not replicated.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -279,10 +311,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +330,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +482,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 986ae1f543d..e02cc1e7c5a 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.34.1
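
As a quick illustration of the view documented above, a minimal usage
sketch (only the column names come from the patch; the publication name,
sequence name, and output shown are illustrative):

postgres=# SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';
 pubname | schemaname | sequencename
---------+------------+--------------
 pub1    | public     | s1
(1 row)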

#249Nisha Moond
nisha.moond412@gmail.com
In reply to: shveta malik (#243)
Re: Logical Replication of sequences

On Wed, Jun 25, 2025 at 9:26 AM shveta malik <shveta.malik@gmail.com> wrote:

On Tue, Jun 24, 2025 at 6:44 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

1. Initially, I have created a publication on sequence s1.
postgres=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
postgres=# ALTER PUBLICATION pub1 SET TABLE t1;
ALTER PUBLICATION
postgres=# \d s1
Sequence "public.s1"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Publications:
"pub1"
postgres=# select * from pg_publication_rel;
oid | prpubid | prrelid | prqual | prattrs
-------+---------+---------+--------+---------
16415 | 16414 | 16388 | |
(1 row)

Here, we can set the publication to TABLE or TABLES FOR SCHEMA. Should
this be allowed?
If a publication is created on FOR ALL TABLES, such an operation is not allowed.

Good catch. IMO, this should not be allowed, as currently we strictly
support either ALL SEQUENCES alone or ALL SEQUENCES together with ALL TABLES.

+1

A similar situation existed for the ALTER PUBLICATION ... ADD ...
command as reported in [1]/messages/by-id/CABdArM7h1qQLUb_S7i6MrLPEtHXnX+Y2fPQaSnqhCdHktcQk5Q@mail.gmail.com (point #3).
This has been addressed in v20250630, where similar to ALL TABLES, ADD
or SET operations are now disallowed for ALL SEQUENCES publications.

[1]: /messages/by-id/CABdArM7h1qQLUb_S7i6MrLPEtHXnX+Y2fPQaSnqhCdHktcQk5Q@mail.gmail.com
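
To make the intended restriction concrete, here is a sketch of how the
earlier example would behave with v20250630 applied (the exact error
text is a placeholder, not taken from the patch):

postgres=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
CREATE PUBLICATION
postgres=# ALTER PUBLICATION pub1 SET TABLE t1;
ERROR:  <rejected, analogous to the existing FOR ALL TABLES restriction>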

--
Thanks,
Nisha

#250Nisha Moond
nisha.moond412@gmail.com
In reply to: Shlok Kyal (#244)
Re: Logical Replication of sequences

On Wed, Jun 25, 2025 at 3:10 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

4. Since we are not adding sequences in the list 'sub_remove_rels',
should we only palloc for (the count of no. of tables)? Is it worth
the effort?
/*
* Rels that we want to remove from subscription and drop any slots
* and origins corresponding to them.
*/
sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));

The sub_remove_rels array allocates memory for all relations in the
subscription, even though it only uses entries for those that are
actually removed. This may result in some unnecessary allocation, even
when only tables are involved. OTOH, as it's a short-lived variable,
pre-allocating can help with performance.
This requires further analysis; I plan to handle it in the next version.

--
Thanks,
Nisha

#251shveta malik
shveta.malik@gmail.com
In reply to: Nisha Moond (#248)
Re: Logical Replication of sequences

On Mon, Jun 30, 2025 at 3:21 PM Nisha Moond <nisha.moond412@gmail.com> wrote:

Tab-completion is not supported after a comma (,) in any other cases.
For example, the following commands are valid, but tab-completion does
not work after the comma:

CREATE PUBLICATION pub7 FOR TABLE t1, TABLES IN SCHEMA public;
CREATE PUBLICATION pub7 FOR TABLES IN SCHEMA public, TABLES IN SCHEMA schema2;

I feel we can keep the behavior consistent in this case too. Thoughts?

Yes, let's keep the behaviour same. No need to make a change.

thanks
Shveta

#252shveta malik
shveta.malik@gmail.com
In reply to: Nisha Moond (#248)
Re: Logical Replication of sequences

On Mon, Jun 30, 2025 at 3:21 PM Nisha Moond <nisha.moond412@gmail.com> wrote:

Please find the attached v20250630 patch set addressing above comments
and other comments in [1],[2],[3] and [4].

Thanks for the patches. I am still in process of reviewing it but
please find few comments:

1)
+ if (pset.sversion >= 180000)
+ appendPQExpBuffer(&buf,
+   ",\n  puballsequences AS \"%s\"",
+   gettext_noop("All sequences"));

The server version check throughout the patch can be modified to 190000
now that a new branch has been created.

2)
+ bool all_pub; /* Special publication for all tables,
+ * sequecnes */

a) Typo: sequecnes --> sequences
b) It is not clear from the comment that when will it be true? Will it
be set when either of all-tables or all-sequences is given or does it
need both?

3)
postgres=# create publication pub1 for all sequences WITH ( PUBLISH='delete');
CREATE PUBLICATION
postgres=# create publication pub2 for all tables, sequences WITH
(PUBLISH='update');
CREATE PUBLICATION

For the first command, 'WITH ( publication_parameter..' is useless.
For the second command, it is applicable only for 'all tables'.

a) I am not sure if we even allow WITH in the first command?
b) In the second command, even if we allow it, there should be some
sort of NOTICE informing that it is applicable only to 'TABLES'.

Thoughts?

Also we allowed altering publication_parameter for all-sequences publication:

postgres=# alter publication pub1 set (publish='insert,update');
ALTER PUBLICATION

c) Should this be restricted as well? Thoughts?

thanks
Shveta

#253shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#252)
Re: Logical Replication of sequences

Few more concerns:

4)
In UpdateSubscriptionRelState():
if (!HeapTupleIsValid(tup))
elog(ERROR, "subscription table %u in subscription %u does not exist",
     relid, subid);

table-->relation as now it can be hit for both sequence and table.

5)
In LogicalRepSyncSequences, why are we allocating it in a permanent
memory context?

+ /* Allocate the tracking info in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+ foreach_ptr(SubscriptionRelState, seq_state, sequences)
+ {
+ SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+ memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+ sequences_not_synced = lappend(sequences_not_synced, rstate);
+ }
+ MemoryContextSwitchTo(oldctx);

Same for 'seq_info' allocation.

6)
In LogicalRepSyncSequences, can you please help me understand this:
why do we first create a new list 'sequences_not_synced' from the
'sequences' list (both have elements of the same type,
SubscriptionRelState), and then use that newly created
'sequences_not_synced' list to create the 'remotesequences' list whose
elements are of type LogicalRepSequenceInfo? Why didn't we use the
'sequences' list directly to create the 'remotesequences' list?

7)
In this function we have 2 variables: seqinfo and seq_info. We are
using seqinfo to create seq_info. The names are very confusing.
Difficult to differentiate between the two.

In copy_sequences() too we have similarly named variables: seqinfo and
sequence_info.

Can we choose different names here?

8)
Why have we named the argument of copy_sequences() as remotesequences?
IIUC, these are the sequences fetched from pg_subscription_rel, and its
elements even have a 'localrelid' field. The name is thus confusing as
it holds local data. Perhaps we can name it candidate_sequences or
init_sequences (i.e. sequences in init state) or any other suitable
name.

thanks
Shveta

#254vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#252)
6 attachment(s)
Re: Logical Replication of sequences

On Tue, 1 Jul 2025 at 15:20, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 30, 2025 at 3:21 PM Nisha Moond <nisha.moond412@gmail.com> wrote:

Please find the attached v20250630 patch set addressing above comments
and other comments in [1],[2],[3] and [4].

Thanks for the patches. I am still in process of reviewing it but
please find few comments:

1)
+ if (pset.sversion >= 180000)
+ appendPQExpBuffer(&buf,
+   ",\n  puballsequences AS \"%s\"",
+   gettext_noop("All sequences"));

The server version check throughout the patch can be modified to 190000
now that a new branch has been created.

Modified

2)
+ bool all_pub; /* Special publication for all tables,
+ * sequecnes */

a) Typo: sequecnes --> sequences

Modified

b) It is not clear from the comment that when will it be true? Will it
be set when either of all-tables or all-sequences is given or does it
need both?

Updated the comments

3)
postgres=# create publication pub1 for all sequences WITH ( PUBLISH='delete');
CREATE PUBLICATION
postgres=# create publication pub2 for all tables, sequences WITH
(PUBLISH='update');
CREATE PUBLICATION

For the first command, 'WITH ( publication_parameter..' is useless.
For the second command, it is applicable only for 'all tables'.

a) I am not sure if we even allow WITH in the first command?
b) In the second command, even if we allow it, there should be some
sort of NOTICE informing that it is applicable only to 'TABLES'.

Thoughts?

Also we allowed altering publication_parameter for all-sequences publication:

postgres=# alter publication pub1 set (publish='insert,update');
ALTER PUBLICATION

c) Should this be restricted as well? Thoughts?

We have documented that these WITH clause parameters are not applicable
to sequences. I feel that all the above statements are okay, given the
documentation mentioned.

Regarding the comments from [1]/messages/by-id/CAJpy0uD+7UtDs9=Cx03BAckMPDdW7C6ifGF_Lc54B8iH6RNXWQ@mail.gmail.com.

5)
In LogicalRepSyncSequences, why are we allocating it in a permanent
memory context?

+ /* Allocate the tracking info in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+ foreach_ptr(SubscriptionRelState, seq_state, sequences)
+ {
+ SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+ memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+ sequences_not_synced = lappend(sequences_not_synced, rstate);
+ }
+ MemoryContextSwitchTo(oldctx);

Same for 'seq_info' allocation.

When we are inside a transaction we will be using
TopTransactionContext. We can palloc() in TopTransactionContext and
safely use that memory throughout the transaction. But we cannot
access memory allocated in TopTransactionContext after
CommitTransaction() finishes, because TopTransactionContext is
explicitly reset (or deleted) at the end of the transaction.
This is the reason we have to use CacheMemoryContext here.

The rest of the comments are fixed.

Also, one of the pending comments from [2]/messages/by-id/CANhcyEWKhHWFzpdAF6czbwq76NRDNCecDqQNtN6Bomn26mqHFw@mail.gmail.com is fixed.

4. Since we are not adding sequences in the list 'sub_remove_rels',
should we only palloc for (the count of no. of tables)? Is it worth
the effort?
/*
* Rels that we want to remove from subscription and drop any slots
* and origins corresponding to them.
*/
sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));

The attached v20250704 version patch has the changes for the same.
[1]: /messages/by-id/CAJpy0uD+7UtDs9=Cx03BAckMPDdW7C6ifGF_Lc54B8iH6RNXWQ@mail.gmail.com
[2]: /messages/by-id/CANhcyEWKhHWFzpdAF6czbwq76NRDNCecDqQNtN6Bomn26mqHFw@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250704-0001-Introduce-pg_sequence_state-function-for-e.patchapplication/octet-stream; name=v20250704-0001-Introduce-pg_sequence_state-function-for-e.patchDownload
From 97c4e94bc69204e1ce59c386045b2f3dbaf738cd Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:12:45 +0530
Subject: [PATCH v20250704 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 810b2b50f0d..560b4785c7d 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..d051adf4931 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* Open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The last value of the sequence, as stored on disk */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d4650947c63..74c34ca4db8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
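
For quick reference, a usage sketch of the function added by this 0001
patch, showing all of its output columns (the column names come from the
pg_proc.dat entry above; the values shown, including the LSN, are only
illustrative):

postgres=# SELECT page_lsn, last_value, log_cnt, is_called
           FROM pg_sequence_state('public', 'sequence_test');
 page_lsn  | last_value | log_cnt | is_called
-----------+------------+---------+-----------
 0/14D4CD8 |          1 |       0 | f
(1 row)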

v20250704-0005-New-worker-for-sequence-synchronization-du.patchapplication/octet-stream; name=v20250704-0005-New-worker-for-sequence-synchronization-du.patchDownload
From 5edf62e66ac9b5852ad5c2d43c77daeae21f8eab Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 4 Jul 2025 15:15:39 +0530
Subject: [PATCH v20250704 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  71 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 667 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  70 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  30 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 228 ++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1199 insertions(+), 110 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index ebd5605afe3..b8f415cd50d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index cf555526dbb..40c82420118 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1396,6 +1396,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index d051adf4931..4d03704f39b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index f66f696f658..eaadadb0f0d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1066,7 +1066,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..161c4901245 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,37 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Set the sequencesync worker failure time.
+ */
+void
+logicalrep_seqsyncworker_set_failuretime()
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	logicalrep_seqsyncworker_set_failuretime();
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +907,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1222,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1369,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1409,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..29461d5b15f
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,667 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+
+		if (rstate->state != SUBREL_STATE_INIT)
+			continue;
+
+		/*
+		 * Check if there is a sequencesync worker already running.
+		 */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										 MyLogicalRepWorker->dbid,
+										 MySubscription->oid,
+										 MySubscription->name,
+										 MyLogicalRepWorker->userid,
+										 InvalidOid,
+										 DSM_HANDLE_INVALID);
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
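+			/*
+			 * The remote query returns rows ordered by (schname, seqname),
+			 * and this loop assumes sequences_to_copy follows the same order,
+			 * so a single forward scan using search_pos is sufficient.  Any
+			 * entries skipped here keep remote_seq_fetched set to false and
+			 * are reported as missing on the publisher after the batch.
+			 */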
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If synchronization of this batch was incomplete, some sequences are
+		 * missing on the publisher. Identify the missing sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by the full batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing and
+		 * the row count may fall short of the batch size.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Allocate the sequence information in a permanent memory context. */
+	oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	MemoryContextSwitchTo(oldctx);
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	seq_count = list_length(subsequences);
+
+	StartTransactionCommand();
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+	list_free_deep(subsequences);
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence sync with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index db15051f47b..5f5770a3908 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			SyncFetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-SyncFetchRelationStates(bool *started_tx)
+SyncFetchRelationStates()
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be used
+	 * until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,12 +174,14 @@ SyncFetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +194,11 @@ SyncFetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +223,11 @@ SyncFetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index dfae43dd806..d51b6326359 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	SyncFetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1248,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1521,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1567,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1589,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = SyncFetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = SyncFetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5f10f28cacb..f1579fa9fa8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync, and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4800,6 +4834,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  SyncInvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4826,6 +4863,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4879,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during apply, table synchronization, or
+	 * sequence synchronization.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 511dc32d519..4be4c5d1b89 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d38780e2678..b919b1fd653 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5696,9 +5696,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 378f2f2c2ba..03d4df572f4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 5394fbc4afe..7590df29910 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_set_failuretime(void);
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,12 +272,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
 										 uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool SyncFetchRelationStates(bool *started_tx);
+extern bool SyncFetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -333,15 +343,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..8b0e3bc76d3
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,228 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = off will not sync newly published sequence'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when the sequence definition differs between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequences\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 31e217b769a..b3e62fa4dad 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1630,6 +1630,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250704-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From ade980b1eb7d0aa30ae6c6473ba8bc6ddb740ff2 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250704 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
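
An illustrative usage sketch (the publication, subscription, and connection
names below are placeholders, not part of this patch):

  -- on the publisher
  CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

  -- on the subscriber
  CREATE SUBSCRIPTION seq_sub
    CONNECTION 'host=publisher dbname=postgres' PUBLICATION seq_pub;

  -- later, re-synchronize all subscribed sequences from the publisher
  ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;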
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 373 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 472 insertions(+), 118 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise, get
+ * only the tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences; otherwise,
+ * get only the sequences that have not reached READY state (i.e. are still
+ * in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index e5dbbe61b81..cf555526dbb 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4ff246cd943..f66f696f658 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,18 +836,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or last refreshed.
+ *
+ * Note that this is a common function for handling the different REFRESH
+ * commands, according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state. The caller must
+ *    pass copy_data = true, refresh_tables = false and refresh_sequences =
+ *    true (these preconditions are asserted).
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -835,10 +892,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	/* Sanity checks for parameter values */
+	Assert(!resync_all_sequences ||
+		   (copy_data && !refresh_tables && refresh_sequences));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables,
+												 refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database, so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,22 +951,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +971,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,28 +988,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1051,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1006,10 +1108,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1023,11 +1125,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9ffe2c38f83..3d5ed0e2f08 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10947,11 +10947,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e8bbce141b7..db15051f47b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ SyncFetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9a4f86b8457..64ce57f57e8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5144,12 +5144,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5203,7 +5203,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index a6f1602d54c..e3989e93c47 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2296,7 +2296,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 74c34ca4db8..d38780e2678 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12278,6 +12278,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86f95b55419..32841da9dde 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4357,7 +4357,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 1443e1d9292..66dcd71eefa 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
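
For reference, the subscriber-side commands added by the patch above can be
exercised as follows (a minimal sketch; the subscription name is
illustrative):

    -- Re-synchronize all sequences that are part of the subscription:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- Pick up newly published tables and sequences without resetting
    -- sequences that are already synchronized:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;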

Attachment: v20250704-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From ed0ae5e6205e521be3fa87a923db8d69f245bcc9 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250704 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  13 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 232 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..d2b663267ad 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..e8bbce141b7
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relations state
+ * is up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+SyncInvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+SyncFetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index c90f23ee5b0..dfae43dd806 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	SyncFetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = SyncFetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index fd11805a44c..5f10f28cacb 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  SyncInvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..5394fbc4afe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,14 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void SyncInvalidateRelationStates(Datum arg, int cacheid,
+										 uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool SyncFetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d561c72321b..31e217b769a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2910,7 +2910,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250704-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 408edfa1f41f231ee1c506cd2ed7d11cea551473 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:14:18 +0530
Subject: [PATCH v20250704 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands are enhanced: \d now displays which
publications contain the specified sequence, and \dRp shows whether a
specified publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
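
For illustration, the new syntax is used as follows (a sketch only; the
publication and table names below are placeholders, and the regression
tests included in this patch show the authoritative behavior):

    -- include every sequence in the database in the publication
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- ALL SEQUENCES can be combined with ALL TABLES
    CREATE PUBLICATION pub_everything FOR ALL SEQUENCES, TABLES;

    -- not allowed: combining FOR ALL ... with object-level clauses
    -- CREATE PUBLICATION pub_bad FOR ALL SEQUENCES, TABLE some_table;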
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 134 ++++--
 src/backend/parser/gram.y                 |  38 +-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  19 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 558 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 707 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..572ec057ad8 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -72,6 +72,8 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 								  AlterPublicationStmt *stmt);
 static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
 static char defGetGeneratedColsOption(DefElem *def);
+static void process_all_objtype_list(List *all_objects_list, bool *all_tables,
+									 bool *all_sequences);
 
 
 static void
@@ -820,6 +822,41 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 	}
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, error out if an object type is specified more than once.
+ */
+static void
+process_all_objtype_list(List *all_objects_list, bool *all_tables,
+						 bool *all_sequences)
+{
+	Assert(all_objects_list);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Create new publication.
  */
@@ -841,6 +878,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	AclResult	aclresult;
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
+	bool		all_tables = false;
+	bool		all_sequences = false;
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -848,11 +887,22 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (stmt->puballobj)
+		process_all_objtype_list(stmt->pubobjects,
+								 &all_tables,
+								 &all_sequences);
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +934,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(all_tables);
+	values[Anum_pg_publication_puballsequences - 1] = BoolGetDatum(all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -913,12 +963,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (all_tables)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1490,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1504,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1961,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2085,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 70a0d832a11..9ffe2c38f83 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -259,6 +259,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +585,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10670,7 +10672,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10690,13 +10697,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					n->pubobjects = (List *) $6;
+					n->puballobj = true;
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10808,6 +10816,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1937997ea67..9a4f86b8457 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4392,6 +4392,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4422,9 +4423,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4440,6 +4446,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4460,6 +4467,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4511,8 +4520,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2485d8f360e..b3bfbdc82bc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3302,6 +3302,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..3035b24f26f 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 53e7d35fe98..a6f1602d54c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3545,12 +3545,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 28e2e8dc0fd..86f95b55419 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4290,13 +4290,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		puballobj;		/* True if the publication is for all tables,
+								 * all sequences, or both */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..749c316a107 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,92 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +320,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +338,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +370,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +386,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +405,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +416,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +452,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +583,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +878,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1071,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1282,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1325,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1340,10 +1408,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1421,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1450,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1476,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1547,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1558,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1579,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1591,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1603,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1614,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1636,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1667,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1679,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1761,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1782,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1917,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..1cf013e72f6 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 114bdafafdf..d561c72321b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2348,6 +2348,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
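
For readers skimming the patches, the end-to-end flow that the regression
tests above exercise and the documentation patch below describes is roughly
the following (an illustrative sketch only, not text from the patches; the
publication/subscription names and the connection string are made up):

    -- publisher: publish the current state of all sequences
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber: the initial synchronization copies the published sequences
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- subscriber: synchronize only newly added sequences
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

    -- subscriber: re-synchronize all sequences, e.g. just before an upgrade cutover
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;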

Attachment: v20250704-0006-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From 5b2f5a4a2b626d417ed57cfda485dcac25ca0277 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250704 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  84 ++++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 8 files changed, 455 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 4f9192316e0..e2e71d97a37 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 59a0874528a..fbba9c65fa6 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5186,9 +5186,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5329,8 +5329,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5353,10 +5353,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index c32e6bc000d..6f67235ec9d 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1711,6 +1715,204 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. If any definitions differ, a WARNING
+     listing the mismatched sequences is logged and the sequence
+     synchronization worker exits with an error. The apply worker detects the
+     failure and repeatedly respawns the sequence synchronization worker until
+     all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
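+   <para>
+    For example, assuming the subscriber's copy of a sequence
+    <literal>s1</literal> had been created with a different
+    <literal>INCREMENT BY</literal> value than on the publisher, and the
+    subscription is named <literal>sub1</literal>, the following commands on
+    the subscriber would realign the definition and re-synchronize the data:
+<programlisting>
+test_sub=# ALTER SEQUENCE s1 INCREMENT BY 1;
+ALTER SEQUENCE
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+</programlisting></para>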
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber can become out of date over time,
+    because incremental updates made on the publisher are not replicated.
+   </para>
+   <para>
+    To detect this, compare the sequence values on the publisher and the
+    subscriber, and if they differ, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
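+   <para>
+    For example, the current value of a sequence can be inspected on both
+    nodes (the values shown here are only illustrative):
+<programlisting>
+test_pub=# SELECT last_value FROM s1;
+ last_value
+------------
+         12
+(1 row)
+
+test_sub=# SELECT last_value FROM s1;
+ last_value
+------------
+         11
+(1 row)
+</programlisting></para>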
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2040,16 +2242,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2367,8 +2572,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2381,8 +2586,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the subscribed-to publications.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing table data and synchronize
+          sequences in the publications that are being subscribed to when the
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      the sequence data. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      re-synchronizes the data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..8c794d9b8d0 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +157,31 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes the current state of all
+      sequences in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -188,6 +209,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           for logical replication does not take this parameter into account when
           copying existing table data.
          </para>
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -224,6 +248,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           See <xref linkend="logical-replication-gencols"/> for more details about
           logical replication of generated columns.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -259,6 +287,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           If this is enabled, <literal>TRUNCATE</literal> operations performed
           directly on partitions are not replicated.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -279,10 +311,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +330,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +482,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index e1ac544ee40..5c1f74eea81 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#255shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#254)
Re: Logical Replication of sequences

On Fri, Jul 4, 2025 at 3:53 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 1 Jul 2025 at 15:20, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 30, 2025 at 3:21 PM Nisha Moond <nisha.moond412@gmail.com> wrote:

Please find the attached v20250630 patch set addressing above comments
and other comments in [1],[2],[3] and [4].

Thanks for the patches. I am still in process of reviewing it but
please find few comments:

1)
+ if (pset.sversion >= 180000)
+ appendPQExpBuffer(&buf,
+   ",\n  puballsequences AS \"%s\"",
+   gettext_noop("All sequences"));

The server version check throughout the patch can be modified to 190000
as a new branch is created now.

Modified

2)
+ bool all_pub; /* Special publication for all tables,
+ * sequecnes */

a) Typo: sequecnes --> sequences

Modified

b) It is not clear from the comment that when will it be true? Will it
be set when either of all-tables or all-sequences is given or does it
need both?

Updated the comments

3)
postgres=# create publication pub1 for all sequences WITH ( PUBLISH='delete');
CREATE PUBLICATION
postgres=# create publication pub2 for all tables, sequences WITH
(PUBLISH='update');
CREATE PUBLICATION

For the first command, 'WITH ( publication_parameter..' is useless.
For the second command, it is applicable only for 'all tables'.

a) I am not sure if we even allow WITH in the first command?
b) In the second command, even if we allow it, there should be some
sort of NOTICE informing that it is applicable only to 'TABLES'.

Thoughts?

Also we allowed altering publication_parameter for all-sequences publication:

postgres=# alter publication pub1 set (publish='insert,update');
ALTER PUBLICATION

c) Should this be restricted as well? Thoughts?

We have documented that the parameter of with clause is not applicable
for sequences. I feel that all the above statements are ok with the
documentation mentioned.

Regarding the comments from [1].

5)
In LogicalRepSyncSequences, why are we allocating it in a permanent
memory context?

+ /* Allocate the tracking info in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+ foreach_ptr(SubscriptionRelState, seq_state, sequences)
+ {
+ SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+ memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+ sequences_not_synced = lappend(sequences_not_synced, rstate);
+ }
+ MemoryContextSwitchTo(oldctx);

Same for 'seq_info' allocation.

When we are in between a transaction we will be using
TopTransactionContext. We can palloc() in TopTransactionContext and
safely use that memory throughout the transaction. But we cannot
access memory allocated in TopTransactionContext after
CommitTransaction() finishes, because TopTransactionContext is
explicitly reset (or deleted) at the end of the transaction.
This is the reason we have to use CacheMemoryContext here.

Okay. I see. Thanks for the details. Can we add this info in comments,
something like:
Allocate the tracking information in a permanent memory context to
ensure it remains accessible across multiple transactions during the
sequence copy process. The memory will be released once the copy is
finished.

The rest of the comments are fixed.

Thank you for the patches. I am not done with review yet, but please
find the comments so far:

1)
LogicalRepSyncSequences()

+ /* Allocate the sequences information in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+ /* Get the sequences that should be synchronized. */
+ subsequences = GetSubscriptionRelations(subid, false, true, true);

Here too we are using CacheMemoryContext? 'subsequences' is only used
in given function and not in copy_sequeneces. I guess, we start
multiple transactions in given function and thus make it essential to
allocate subsequences in CacheMemoryContext. But why are we doing
StartTransactionCommand twice? Is this intentional? Can we do it once
before GetSubscriptionRelations() and commit it once after for-loop is
over? IIUC, then we will not need CacheMemoryContext here.
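
Roughly, I was imagining something along these lines (only a sketch using
the patch's function names, untested):

    StartTransactionCommand();

    /* Get the sequences that should be synchronized. */
    subsequences = GetSubscriptionRelations(subid, false, true, true);

    foreach_ptr(SubscriptionRelState, seq_state, subsequences)
    {
        /* copy/synchronize each sequence here */
    }

    CommitTransactionCommand();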

2)
logicalrep_seqsyncworker_set_failuretime()

a) This function is extern'ed in worker_internal.h. But I do not see
its usage outside launcher.c. Is it a mistake?

b) Also, do we really need logicalrep_seqsyncworker_set_failuretime?
Is it better to move its logic instead in
logicalrep_seqsyncworker_set_failuretime()?

3)
SyncFetchRelationStates:
Earlier the name was FetchTableStates. If we really want to use the
'Sync' keyword, we can name it FetchRelationSyncStates, as we are
fetching sync-status only. Thoughts?

4)
ProcessSyncingSequencesForApply():
+
+ if (rstate->state != SUBREL_STATE_INIT)
+ continue;

Why do we expect that rstate->state is not INIT at this time when we
have fetched only INIT sequences in sequence_states_not_ready. Shall
this be assert? If there is a valid scenario where it can be READY
here, please add comments.
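
For example, perhaps simply (untested sketch):

    Assert(rstate->state == SUBREL_STATE_INIT);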

5)
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ InvalidOid,
+ DSM_HANDLE_INVALID);
+ break;
+ }

We set sequencesync_failure_time to 0, but if logicalrep_worker_launch
is not able to launch the worker due to some reason, next time it will
not even wait for 'wal_retrieve_retry_interval time' to attempt
restarting it again. Is that intentional?

In other workflows such as while launching table-sync or apply worker,
this scenario does not arise. This is because we maintain start_time
there which can never be 0 instead of failure time and before
attempting to start the workers, we set start_time to current time.
The seq-sync failure-time OTOH is only set to non-null in
logicalrep_seqsyncworker_failure() and it is not necessary that we
will hit that function as the logicalrep_worker_launch() may fail
before that itself. Do you think we shall maintain start-time instead
of failure-time for seq-sync worker as well? Or is there any other way
to handle it?

thanks
Shveta

#256shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#255)
Re: Logical Replication of sequences

On Mon, Jul 7, 2025 at 2:37 PM shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Jul 4, 2025 at 3:53 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 1 Jul 2025 at 15:20, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jun 30, 2025 at 3:21 PM Nisha Moond <nisha.moond412@gmail.com> wrote:

Please find the attached v20250630 patch set addressing above comments
and other comments in [1],[2],[3] and [4].

Thanks for the patches. I am still in process of reviewing it but
please find few comments:

1)
+ if (pset.sversion >= 180000)
+ appendPQExpBuffer(&buf,
+   ",\n  puballsequences AS \"%s\"",
+   gettext_noop("All sequences"));

The server version check throughout the patch can be modified to 190000
as a new branch is created now.

Modified

2)
+ bool all_pub; /* Special publication for all tables,
+ * sequecnes */

a) Typo: sequecnes --> sequences

Modified

b) It is not clear from the comment that when will it be true? Will it
be set when either of all-tables or all-sequences is given or does it
need both?

Updated the comments

3)
postgres=# create publication pub1 for all sequences WITH ( PUBLISH='delete');
CREATE PUBLICATION
postgres=# create publication pub2 for all tables, sequences WITH
(PUBLISH='update');
CREATE PUBLICATION

For the first command, 'WITH ( publication_parameter..' is useless.
For the second command, it is applicable only for 'all tables'.

a) I am not sure if we even allow WITH in the first command?
b) In the second command, even if we allow it, there should be some
sort of NOTICE informing that it is applicable only to 'TABLES'.

Thoughts?

Also we allowed altering publication_parameter for all-sequences publication:

postgres=# alter publication pub1 set (publish='insert,update');
ALTER PUBLICATION

c) Should this be restricted as well? Thoughts?

We have documented that the parameter of with clause is not applicable
for sequences. I feel that all the above statements are ok with the
documentation mentioned.

Regarding the comments from [1].

5)
In LogicalRepSyncSequences, why are we allocating it in a permanent
memory context?

+ /* Allocate the tracking info in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+ foreach_ptr(SubscriptionRelState, seq_state, sequences)
+ {
+ SubscriptionRelState *rstate = palloc(sizeof(SubscriptionRelState));
+
+ memcpy(rstate, seq_state, sizeof(SubscriptionRelState));
+ sequences_not_synced = lappend(sequences_not_synced, rstate);
+ }
+ MemoryContextSwitchTo(oldctx);

Same for 'seq_info' allocation.

When we are in between a transaction we will be using
TopTransactionContext. We can palloc() in TopTransactionContext and
safely use that memory throughout the transaction. But we cannot
access memory allocated in TopTransactionContext after
CommitTransaction() finishes, because TopTransactionContext is
explicitly reset (or deleted) at the end of the transaction.
This is the reason we have to use CacheMemoryContext here.

Okay. I see. Thanks for the details. Can we add this info in comments,
something like:
Allocate the tracking information in a permanent memory context to
ensure it remains accessible across multiple transactions during the
sequence copy process. The memory will be released once the copy is
finished.

The rest of the comments are fixed.

Thank you for the patches. I am not done with review yet, but please
find the comments so far:

1)
LogicalRepSyncSequences()

+ /* Allocate the sequences information in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+ /* Get the sequences that should be synchronized. */
+ subsequences = GetSubscriptionRelations(subid, false, true, true);

Here too we are using CacheMemoryContext? 'subsequences' is only used
in given function and not in copy_sequeneces. I guess, we start
multiple transactions in given function and thus make it essential to
allocate subsequences in CacheMemoryContext. But why are we doing
StartTransactionCommand twice? Is this intentional? Can we do it once
before GetSubscriptionRelations() and commit it once after for-loop is
over? IIUC, then we will not need CacheMemoryContext here.

2)
logicalrep_seqsyncworker_set_failuretime()

a) This function is extern'ed in worker_internal.h. But I do not see
its usage outside launcher.c. Is it a mistake?

b) Also, do we really need logicalrep_seqsyncworker_set_failuretime?
Is it better to move its logic instead in
logicalrep_seqsyncworker_set_failuretime()?

3)
SyncFetchRelationStates:
Earlier the name was FetchTableStates. If we really want to use the
'Sync' keyword, we can name it FetchRelationSyncStates, as we are
fetching sync-status only. Thoughts?

4)
ProcessSyncingSequencesForApply():
+
+ if (rstate->state != SUBREL_STATE_INIT)
+ continue;

Why do we expect that rstate->state is not INIT at this time when we
have fetched only INIT sequences in sequence_states_not_ready. Shall
this be assert? If there is a valid scenario where it can be READY
here, please add comments.

5)
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ InvalidOid,
+ DSM_HANDLE_INVALID);
+ break;
+ }

We set sequencesync_failure_time to 0, but if logicalrep_worker_launch
is not able to launch the worker due to some reason, next time it will
not even wait for 'wal_retrieve_retry_interval time' to attempt
restarting it again. Is that intentional?

In other workflows such as while launching table-sync or apply worker,
this scenario does not arise. This is because we maintain start_time
there which can never be 0 instead of failure time and before
attempting to start the workers, we set start_time to current time.
The seq-sync failure-time OTOH is only set to non-null in
logicalrep_seqsyncworker_failure() and it is not necessary that we
will hit that function as the logicalrep_worker_launch() may fail
before that itself. Do you think we shall maintain start-time instead
of failure-time for seq-sync worker as well? Or is there any other way
to handle it?

I thought about this more. Another idea could be to capture the return
value of logicalrep_worker_launch() and if it is false, then we can
set failure_time to current time. Thoughts?
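
For instance, roughly (only a sketch based on the snippet quoted above):

    if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
                                  MyLogicalRepWorker->dbid,
                                  MySubscription->oid,
                                  MySubscription->name,
                                  MyLogicalRepWorker->userid,
                                  InvalidOid,
                                  DSM_HANDLE_INVALID))
        MyLogicalRepWorker->sequencesync_failure_time = GetCurrentTimestamp();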

thanks
Shveta

#257shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#256)
Re: Logical Replication of sequences

Please find a few more comments on July4 patch

6)
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>

This sentence looks odd, as we have 'first' but no follow-up sentence
after that. Can we please combine this line with the next one in the
doc saying:

To synchronize sequences from a publisher to a subscriber, first
publish them using CREATE PUBLICATION ... FOR ALL SEQUENCES and then
at the subscriber side:

7)

+         <para>
+          This parameter is not applicable for sequences.
+         </para>

It is mentioned 3 times in doc for publish, publish_generated_columns
and publish_via_partition_root. Instead shall we mention it once for
WITH-clause itself. Something like:

This clause specifies optional parameters for a publication when
publishing tables. This clause is not applicable for sequences.

8)
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.

Why not:
"The view pg_publication_sequences provides information about the
mapping between publications and sequences."

I think the existing detail has been written similar to
'pg_publication_tables' doc. But there, 'information of tables' made
sense as pg_publication_tables has attnames and rowfilters too. But
pg_publication_sequences OTOH just has
the mapping between names. No other information.

9)
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared.

Now in code, we give WARNING for missing sequences on publisher as
well. Do we need to mention that here? IIUC, this WARNING for missing
sequences can come only if the worker is respawned to sync
unmatched/failed sequences and meanwhile any one of failed sequences
is dropped on publisher. But it will be good to mention it briefly in
doc.

thanks
Shveta

#258vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#255)
6 attachment(s)
Re: Logical Replication of sequences

On Mon, 7 Jul 2025 at 14:37, shveta malik <shveta.malik@gmail.com> wrote:

When we are in between a transaction we will be using
TopTransactionContext. We can palloc() in TopTransactionContext and
safely use that memory throughout the transaction. But we cannot
access memory allocated in TopTransactionContext after
CommitTransaction() finishes, because TopTransactionContext is
explicitly reset (or deleted) at the end of the transaction.
This is the reason we have to use CacheMemoryContext here.

Okay. I see. Thanks for the details. Can we add this info in comments,
something like:
Allocate the tracking information in a permanent memory context to
ensure it remains accessible across multiple transactions during the
sequence copy process. The memory will be released once the copy is
finished.

I felt the existing comment is ok as it is worded similarly in
FetchRelationStates too. I don't want to keep it different in
different places.

1)
LogicalRepSyncSequences()

+ /* Allocate the sequences information in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+ /* Get the sequences that should be synchronized. */
+ subsequences = GetSubscriptionRelations(subid, false, true, true);

Here too we are using CacheMemoryContext? 'subsequences' is only used
in given function and not in copy_sequeneces. I guess, we start
multiple transactions in given function and thus make it essential to
allocate subsequences in CacheMemoryContext. But why are we doing
StartTransactionCommand twice? Is this intentional? Can we do it once
before GetSubscriptionRelations() and commit it once after for-loop is
over? IIUC, then we will not need CacheMemoryContext here.

Modified

2)
logicalrep_seqsyncworker_set_failuretime()

a) This function is extern'ed in worker_internal.h. But I do not see
its usage outside launcher.c. Is it a mistake?

This is removed now as part of the next comment fix

b) Also, do we really need logicalrep_seqsyncworker_set_failuretime?
Is it better to move its logic instead in
logicalrep_seqsyncworker_set_failuretime()?

Modified

3)
SyncFetchRelationStates:
Earlier the name was FetchTableStates. If we really want to use the
'Sync' keyword, we can name it FetchRelationSyncStates, as we are
fetching sync-status only. Thoughts?

Instead of FetchRelationSyncStates, I preferred FetchRelationStates
and have renamed it accordingly.

4)
ProcessSyncingSequencesForApply():
+
+ if (rstate->state != SUBREL_STATE_INIT)
+ continue;

Why do we expect that rstate->state is not INIT at this time when we
have fetched only INIT sequences in sequence_states_not_ready. Shall
this be assert? If there is a valid scenario where it can be READY
here, please add comments.

Modified to Assert

5)
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ InvalidOid,
+ DSM_HANDLE_INVALID);
+ break;
+ }

We set sequencesync_failure_time to 0, but if logicalrep_worker_launch
is not able to launch the worker due to some reason, next time it will
not even wait for 'wal_retrieve_retry_interval time' to attempt
restarting it again. Is that intentional?

In other workflows such as while launching table-sync or apply worker,
this scenario does not arise. This is because we maintain start_time
there which can never be 0 instead of failure time and before
attempting to start the workers, we set start_time to current time.
The seq-sync failure-time OTOH is only set to non-null in
logicalrep_seqsyncworker_failure() and it is not necessary that we
will hit that function as the logicalrep_worker_launch() may fail
before that itself. Do you think we shall maintain start-time instead
of failure-time for seq-sync worker as well? Or is there any other way
to handle it?

I preferred the suggestion from [1]/messages/by-id/CAJpy0uA6ugJN+tBwv38mG+6vf-uMuQoqxaZCX-mg1qBWX+=Bkw@mail.gmail.com. Modified it accordingly.

The attached v20250709 version patch has the changes for the same.

[1]: /messages/by-id/CAJpy0uA6ugJN+tBwv38mG+6vf-uMuQoqxaZCX-mg1qBWX+=Bkw@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250709-0005-New-worker-for-sequence-synchronization-du.patch
From c7c436b3ef7ffa10151d9f3c1fd04a02ee9d115c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 9 Jul 2025 14:58:11 +0530
Subject: [PATCH v20250709 5/6] New worker for sequence synchronization during 
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  62 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 658 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  70 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 228 ++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1180 insertions(+), 110 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index ebd5605afe3..b8f415cd50d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index cf555526dbb..40c82420118 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1396,6 +1396,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index d051adf4931..4d03704f39b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bd575237d5d..fb410c5e503 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1066,7 +1066,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..5df81cbec82 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +898,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1213,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1360,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1400,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..c12394c8cc2
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,658 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and would need a new
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences needing synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
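+/*
+ * List of SubscriptionRelState entries for subscription sequences that are
+ * not yet in READY state; maintained by FetchRelationStates().
+ */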
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Walk over all subscription sequences that are individually tracked by the
+ * apply process (currently, all that have state SUBREL_STATE_INIT) and manage
+ * synchronization for them.
+ *
+ * If a sequencesync worker is running already, there is no need to start a new
+ * one; the existing sequencesync worker will synchronize all the sequences. If
+ * there are still any sequences to be synced after the sequencesync worker
+ * exited, then a new sequencesync worker can be started in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	bool		started_tx = false;
+
+	Assert(!IsTransactionState());
+
+	/* Start the sequencesync worker if needed, and there is not one already. */
+	foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+	{
+		LogicalRepWorker *sequencesync_worker;
+		int			nsyncworkers;
+
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE);
+		Assert(rstate->state == SUBREL_STATE_INIT);
+
+		/* Check whether a sequencesync worker is already running. */
+		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+		sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+													 InvalidOid,
+													 WORKERTYPE_SEQUENCESYNC,
+													 true);
+		if (sequencesync_worker)
+		{
+			/* Now safe to release the LWLock */
+			LWLockRelease(LogicalRepWorkerLock);
+			break;
+		}
+
+		/*
+		 * Count running sync workers for this subscription, while we have the
+		 * lock.
+		 */
+		nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+
+		/*
+		 * If there is a free sync worker slot, start a new sequencesync
+		 * worker, and break from the loop.
+		 */
+		if (nsyncworkers < max_sync_workers_per_subscription)
+		{
+			TimestampTz now = GetCurrentTimestamp();
+
+			/*
+			 * To prevent starting the sequencesync worker at a high frequency
+			 * after a failure, we store its last failure time. We start the
+			 * sequencesync worker again after waiting at least
+			 * wal_retrieve_retry_interval.
+			 */
+			if (!MyLogicalRepWorker->sequencesync_failure_time ||
+				TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+										   now, wal_retrieve_retry_interval))
+			{
+				MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+				if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											  MyLogicalRepWorker->dbid,
+											  MySubscription->oid,
+											  MySubscription->name,
+											  MyLogicalRepWorker->userid,
+											  InvalidOid,
+											  DSM_HANDLE_INVALID))
+					MyLogicalRepWorker->sequencesync_failure_time = now;
+
+				break;
+			}
+		}
+	}
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but whose parameters do not match.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences to have parameters matching the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences to have parameters matching the publisher's.");
+		}
+	}
+
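+	/*
+	 * For example, the combined errdetail can look like:
+	 * Missing sequence(s) on publisher: ("public.s1"); mismatched
+	 * sequence(s) on subscriber: ("public.s2").
+	 */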
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
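+		/*
+		 * Columns returned by the remote query, in order: schname, seqname,
+		 * page_lsn, last_value, log_cnt, is_called (from pg_sequence_state),
+		 * followed by seqtypid, seqstart, seqincrement, seqmin, seqmax and
+		 * seqcycle (from pg_sequence).
+		 */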
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
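+
+		/*
+		 * For a batch containing, say, public.s1 and public.s2, the VALUES
+		 * list built above expands to:
+		 *
+		 *     ( VALUES ('public', 's1'), ('public', 's2') ) AS s (schname, seqname)
+		 *
+		 * The ORDER BY mirrors the sort order of sequences_to_copy, so the
+		 * result rows can be matched against the list in a single forward
+		 * pass below.
+		 */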
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/*
+			 * Locate the entry in sequences_to_copy corresponding to the row
+			 * fetched from the publisher.  Both the query result and the list
+			 * are ordered by (nspname, seqname), so a single forward scan
+			 * using search_pos is sufficient.
+			 */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If synchronization of this batch was incomplete because some
+		 * sequences are missing on the publisher, identify which ones are
+		 * missing.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the full batch size rather than by the
+		 * number of fetched rows, since sequences missing on the publisher
+		 * return no row.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+	seq_count = list_length(subsequences);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that sequence synchronization runs as the sequence owner,
+		 * unless the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..8914f5cca10 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates()
 {
+	/*
+	 * has_subtables is static so that its value is retained across calls
+	 * until a pg_subscription_rel invalidation forces the relation states to
+	 * be rebuilt.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,12 +174,14 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +194,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +223,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b66ac6eb865..6cf910c6ea6 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1248,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1521,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1567,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1589,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 39a53c84e04..c49b025f16a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4800,6 +4834,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  InvalidateRelationStates,
 								  (Datum) 0);
+
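+	/*
+	 * Register the sequencesync failure callback so that, on error exit, the
+	 * apply worker's sequencesync_failure_time is updated and relaunches can
+	 * be throttled.  The callback is cancelled on clean exit in
+	 * FinishSyncWorker().
+	 */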
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4826,6 +4863,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4879,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 511dc32d519..4be4c5d1b89 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d38780e2678..b919b1fd653 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5696,9 +5696,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
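+/*
+ * Per-sequence information tracked by the sequencesync worker while copying
+ * sequence values from the publisher; remote_seq_fetched records whether a
+ * matching row was returned by the publisher.
+ */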
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
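+/*
+ * Value passed to SetSequence() for log_cnt by callers that simply want the
+ * historical setval() behavior of clearing log_cnt.  The sequencesync worker
+ * instead passes the log_cnt value fetched from the publisher.
+ */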
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 663b87a9c80..2373968ff0d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
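+	/*
+	 * Time of the most recent sequencesync worker failure for this
+	 * subscription.  Set on the apply worker's slot and consulted by
+	 * ProcessSyncingSequencesForApply() to avoid relaunching a failed
+	 * sequencesync worker more often than wal_retrieve_retry_interval.
+	 */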
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,11 +271,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -332,15 +341,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..8b0e3bc76d3
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,228 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for the mismatched sequence definition is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequences\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 67de8beeaf2..59c537393c8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1630,6 +1630,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0
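
For quick reference, the subscriber-side flow that the new TAP test above
(t/036_sequences.pl) exercises boils down to roughly the following sketch;
the object names and the connection string are placeholders:

  -- publisher: publish every sequence in the database
  CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

  -- subscriber: the initial sync copies the current sequence state
  CREATE SUBSCRIPTION seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION seq_pub;

  -- pick up sequences that were published after the subscription was created
  ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

  -- additionally re-synchronize sequences the subscription already knows about
  ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

  -- verify the copied state on the subscriber
  SELECT last_value, log_cnt, is_called FROM some_sequence;

Per the test, plain REFRESH PUBLICATION only syncs newly published sequences
(and skips even those when copy_data = false), while REFRESH PUBLICATION
SEQUENCES also re-syncs sequences that were already subscribed.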

Attachment: v20250709-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 3f707982be1871c955babc5bea91a87b22a9c9b1 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250709 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 231 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..ee98922c237 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e4fd6347fd1..b66ac6eb865 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index c5fb627aa56..39a53c84e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..663b87a9c80 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 11454bf8a57..67de8beeaf2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2911,7 +2911,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250709-0001-Introduce-pg_sequence_state-function-for-e.patch (application/octet-stream)
From 057bc777c65da0f4427b88314186021e7846a6bb Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:12:45 +0530
Subject: [PATCH v20250709 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c28aa71f570..97bbc7367e9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..d051adf4931 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* Open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d4650947c63..74c34ca4db8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
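
A minimal illustration of what the 0001 patch exposes, based on the
regression test included above (the sequence name here is arbitrary):

  CREATE SEQUENCE seq_demo;

  -- page_lsn is the LSN of the sequence's data page; per the commit message,
  -- later patches use this function to fetch sequence state from the
  -- publisher when synchronizing it on the subscriber
  SELECT page_lsn, last_value, log_cnt, is_called
    FROM pg_sequence_state('public', 'seq_demo');

  -- a freshly created sequence reports last_value = 1, log_cnt = 0,
  -- is_called = f, matching the expected output added to sequence.out

The function requires USAGE or SELECT privilege on the sequence, as stated in
the documentation added to func.sgml.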

Attachment: v20250709-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 3b5aacf30198386095311c1d7bd2d326c1e2df8e Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:14:18 +0530
Subject: [PATCH v20250709 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now displays which publications contain
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 134 ++++--
 src/backend/parser/gram.y                 |  38 +-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  19 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 558 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 707 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..572ec057ad8 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -72,6 +72,8 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 								  AlterPublicationStmt *stmt);
 static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
 static char defGetGeneratedColsOption(DefElem *def);
+static void process_all_objtype_list(List *all_objects_list, bool *all_tables,
+									 bool *all_sequences);
 
 
 static void
@@ -820,6 +822,41 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 	}
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pubobjtype has been specified more than once.
+ */
+static void
+process_all_objtype_list(List *all_objects_list, bool *all_tables,
+						 bool *all_sequences)
+{
+	Assert(all_objects_list);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Create new publication.
  */
@@ -841,6 +878,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	AclResult	aclresult;
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
+	bool		all_tables = false;
+	bool		all_sequences = false;
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -848,11 +887,22 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (stmt->puballobj)
+		process_all_objtype_list(stmt->pubobjects,
+								 &all_tables,
+								 &all_sequences);
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +934,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = all_tables;
+	values[Anum_pg_publication_puballsequences - 1] = all_sequences;
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -913,12 +963,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (all_tables)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1490,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1504,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1961,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2085,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 70a0d832a11..9ffe2c38f83 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -259,6 +259,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +585,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10670,7 +10672,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10690,13 +10697,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					n->pubobjects = (List *) $6;
+					n->puballobj = true;
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10808,6 +10816,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1937997ea67..9a4f86b8457 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4392,6 +4392,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4422,9 +4423,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4440,6 +4446,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4460,6 +4467,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4511,8 +4520,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2485d8f360e..b3bfbdc82bc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3302,6 +3302,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
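
Note that dumpPublication() above always emits TABLES before SEQUENCES, regardless of the order given at CREATE time, which is what the pub6 test case checks:

    -- created as
    CREATE PUBLICATION pub6 FOR ALL SEQUENCES, TABLES WITH (publish = '');
    -- dumped as
    CREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');
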
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..3035b24f26f 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
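
Besides the \d+ and \dRp output changed above, the new catalog flag can also be inspected directly; the publication.out additions below use the same kind of query:

    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname = 'regress_pub_forallsequences1';
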
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 53e7d35fe98..a6f1602d54c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3545,12 +3545,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
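
The new catalog column is version-gated on the client side in the hunks above (psql and pg_dump substitute a constant when talking to older servers); as a rough sketch of that pattern in plain SQL, not taken verbatim from the patch:

    -- server has the column (>= 190000 in this patch's numbering)
    SELECT oid, pubname, puballtables, puballsequences
    FROM pg_catalog.pg_publication;

    -- older server: emulate it with a constant, as describe.c does
    SELECT oid, pubname, puballtables, false AS puballsequences
    FROM pg_catalog.pg_publication;
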
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 28e2e8dc0fd..86f95b55419 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4290,13 +4290,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		puballobj;		/* True if the publication is for all tables,
+								 * all sequences, or both */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
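
Since the FOR ALL case now goes through a list of PublicationAllObjSpec nodes instead of the single for_all_tables flag, duplicate entries have to be rejected when the statement is transformed (handled in the publication command code, which is not part of these hunks); the regression tests below cover exactly that case:

    CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
    ERROR:  invalid publication object list
    DETAIL:  ALL TABLES can be specified only once.
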
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..749c316a107 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,92 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +320,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +338,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +370,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +386,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +405,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +416,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +452,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +583,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +878,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1071,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1282,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1325,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1340,10 +1408,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1421,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1450,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1476,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1547,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1558,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1579,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1591,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1603,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1614,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1636,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1667,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1679,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1761,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1782,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1917,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..1cf013e72f6 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 83192038571..11454bf8a57 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2349,6 +2349,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
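
To make the regression test changes above easier to read, here is a minimal
sketch of the publisher-side syntax they exercise; the sequence and
publication names below are placeholders, the commands themselves follow
the tests:

-- placeholder names; syntax as exercised in publication.sql above
CREATE SEQUENCE seq_demo;
CREATE PUBLICATION pub_seq_only FOR ALL SEQUENCES;
CREATE PUBLICATION pub_seq_and_tables FOR ALL SEQUENCES, TABLES;
-- the new per-publication flag is visible in the catalog and in \dRp+
SELECT pubname, puballtables, puballsequences FROM pg_publication;

As the updated \dRp output above shows, existing table publications simply
report "f" in the new "All sequences" column.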

Attachment: v20250709-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From bbb20391f2b75f01fd3b07e70ab1474f75aac5ad Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250709 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 373 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 472 insertions(+), 118 deletions(-)
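
Before the diffs, a minimal sketch of the subscriber-side commands this
patch is about (the subscription name is a placeholder; the behaviour
follows the AlterSubscription_refresh comments further down):

-- re-synchronize all published sequences by resetting them to INIT state
ALTER SUBSCRIPTION sub_demo REFRESH PUBLICATION SEQUENCES;
-- the existing form still picks up newly added or removed tables and
-- sequences, honouring copy_data
ALTER SUBSCRIPTION sub_demo REFRESH PUBLICATION WITH (copy_data = true);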

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index e5dbbe61b81..cf555526dbb 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e23b0de7242..bd575237d5d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,18 +836,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -835,10 +892,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,22 +951,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +971,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,28 +988,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1051,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1006,10 +1108,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1023,11 +1125,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9ffe2c38f83..3d5ed0e2f08 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10947,11 +10947,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9a4f86b8457..64ce57f57e8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5144,12 +5144,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5203,7 +5203,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index a6f1602d54c..e3989e93c47 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2296,7 +2296,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 74c34ca4db8..d38780e2678 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12278,6 +12278,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86f95b55419..32841da9dde 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4357,7 +4357,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cf828ca8d0..9623240915c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1458,6 +1458,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2171,6 +2179,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2181,7 +2190,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 529b2241731..14dad19158b 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250709-0006-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From b117fe8255e8982713775805e33cacbf4fe11afb Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250709 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 244 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  84 ++++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  67 ++++++
 8 files changed, 455 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index aa5b8772436..e1f9af82a46 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 59a0874528a..fbba9c65fa6 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5186,9 +5186,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5329,8 +5329,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5353,10 +5353,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index f317ed9c50e..0e6345f85b1 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1711,6 +1715,204 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>
+
+  <para>
+   At the subscriber side:
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared. A WARNING is logged listing all differing
+     sequences before the process exits. The apply worker detects the failure
+     and repeatedly respawns the sequence synchronization worker to continue
+     the synchronization process until all differences are resolved. See also
+     <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+    </para>
+   </warning>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+    Then, execute <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2040,16 +2242,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2367,8 +2572,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2381,8 +2586,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..8c794d9b8d0 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,6 +157,31 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
@@ -188,6 +209,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           for logical replication does not take this parameter into account when
           copying existing table data.
          </para>
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -224,6 +248,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           See <xref linkend="logical-replication-gencols"/> for more details about
           logical replication of generated columns.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -259,6 +287,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
           If this is enabled, <literal>TRUNCATE</literal> operations performed
           directly on partitions are not replicated.
          </para>
+
+         <para>
+          This parameter is not applicable for sequences.
+         </para>
         </listitem>
        </varlistentry>
       </variablelist></para>
@@ -279,10 +311,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +330,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +482,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index e1ac544ee40..5c1f74eea81 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -131,6 +131,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2475,6 +2480,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#259Nisha Moond
nisha.moond412@gmail.com
In reply to: vignesh C (#258)
Re: Logical Replication of sequences

On Wed, Jul 9, 2025 at 4:11 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250709 version patch has the changes for the same.

Thanks for the patches.

In Patch-004, sequencesync.c: I think the function logic below can be simplified.

+void
+ProcessSyncingSequencesForApply(void)
+{
+ bool started_tx = false;
+
+ Assert(!IsTransactionState());
+
+ /* Start the sequencesync worker if needed, and there is not one already. */
+ foreach_ptr(SubscriptionRelState, rstate, sequence_states_not_ready)
+ {
...

Currently, we loop through all INIT sequences to start a sequencesync
worker. But since a single worker handles synchronization for all the
sequences in the list "sequence_states_not_ready", iterating through the
entire list may lead to unnecessary work in cases like:

a) when no sync worker slots are available (e.g., nsyncworkers ==
max_sync_workers_per_subscription), or
b) when the retry interval since sequencesync_failure_time has not yet elapsed.

We could instead check if the list is non-empty (or use a simple bool)
and attempt to start the worker. If it can’t be started, we can try
again in the next apply loop.
Thoughts?

--
Thanks.
Nisha

#260shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#258)
Re: Logical Replication of sequences

On Wed, Jul 9, 2025 at 4:11 PM vignesh C <vignesh21@gmail.com> wrote:

3)
SyncFetchRelationStates:
Earlier the name was FetchTableStates. If we really want to use the
'Sync' keyword, we can name it FetchRelationSyncStates, as we are
fetching sync-status only. Thoughts?

Instead of FetchRelationSyncStates, I preferred FetchRelationStates,
and changed it accordingly.

Okay, LGTM.

5)
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ InvalidOid,
+ DSM_HANDLE_INVALID);
+ break;
+ }

We set sequencesync_failure_time to 0, but if logicalrep_worker_launch
is unable to launch the worker for some reason, then next time it will
not even wait for wal_retrieve_retry_interval before attempting to
restart it. Is that intentional?

In other workflows, such as launching the table-sync or apply worker,
this scenario does not arise. This is because we maintain a start_time
there (which can never be 0) instead of a failure time, and before
attempting to start the workers we set start_time to the current time.
The seq-sync failure time, OTOH, is only set to non-null in
logicalrep_seqsyncworker_failure(), and it is not guaranteed that we
will reach that function, since logicalrep_worker_launch() may fail
before that point. Do you think we should maintain a start time instead
of a failure time for the seq-sync worker as well? Or is there any other
way to handle it?

I preferred the suggestion from [1]. Modified it accordingly.

Okay, works for me.

The attached v20250709 version patch has the changes for the same.

Thank you for the patches. Please find a few comments:

1)
Shall we update the pg_publication doc as well to indicate that pubinsert,
pubupdate, pubdelete, pubtruncate, and pubviaroot are meaningful only
when publishing tables? For sequences, these have no meaning.

2)
Shall we call walrcv_disconnect() after the copy is done in
LogicalRepSyncSequences()?

3)
Do we really need the for-loop in ProcessSyncingSequencesForApply? I think
this function is inspired by ProcessSyncingTablesForApply(), but there we
need different tablesync workers for different tables. For sequences,
that is not the case, and thus the for-loop can be omitted. If we do so,
we can amend the comments too where it says "Walk over all
subscription sequences....."

4)
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(

We can drop regress_seq_sub on the publisher now and check for missing
warnings as the next step.

5)
I am revisiting the test given in [1]; I see there is a documentation change as follows:

+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link
linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.

But this doc specifically mentions a failover case. It does not
mention the case presented in [1], i.e. if a user tries to use a
sequence to populate the identity column of a "subscribed" table where
the sequence is also synced originally from the publisher, then they may
end up with a corrupted state of the IDENTITY column, so such cases
should be used with caution. Please review once and see if we need to
mention this and the example too.
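
To make the concern concrete, here is a minimal sketch of that scenario
(object names are illustrative, and the exact failure symptom depends on
the workload):

-- Publisher: identity column backed by a sequence, published together
-- with a FOR ALL SEQUENCES publication.
CREATE TABLE t1 (id int GENERATED ALWAYS AS IDENTITY PRIMARY KEY, val text);
INSERT INTO t1 (val) VALUES ('from publisher');

-- Subscriber: after sequence synchronization, the sequence behind t1.id
-- holds the publisher's last value.  A local insert now draws the same
-- values the publisher will hand out next, so subsequently replicated
-- rows can collide with the locally generated ids (e.g. duplicate-key
-- errors on the primary key).
INSERT INTO t1 (val) VALUES ('local row');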

[1]: /messages/by-id/CAJpy0uDK-QOULpd6x+isGrzwWyn16HHF0UPWqLGtOXQ-Z5M=yQ@mail.gmail.com

thanks
Shveta

#261vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#257)
6 attachment(s)
Re: Logical Replication of sequences

On Wed, 9 Jul 2025 at 14:12, shveta malik <shveta.malik@gmail.com> wrote:

Please find a few more comments on the July 4 patch

6)
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>.
+  </para>

This sentence looks odd, as we have 'first' but no follow-up sentence
after that. Can we please combine this line with the next one in the
doc saying:

To synchronize sequences from a publisher to a subscriber, first
publish them using CREATE PUBLICATION ... FOR ALL SEQUENCES and then
at the subscriber side:

Modified
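
For readers following the thread, a minimal sketch of the resulting flow
(the connection string and object names are illustrative; the syntax is
the one added by this patch series):

-- On the publisher:
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

-- On the subscriber:
CREATE SUBSCRIPTION seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION seq_pub;

-- Later, to re-synchronize the sequence values on the subscriber:
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;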

7)

+         <para>
+          This parameter is not applicable for sequences.
+         </para>

It is mentioned 3 times in the doc, for publish, publish_generated_columns
and publish_via_partition_root. Instead, shall we mention it once for the
WITH clause itself? Something like:

This clause specifies optional parameters for a publication when
publishing tables. This clause is not applicable for sequences.

Modified

8)
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and information of
+   sequences they contain.

Why not:
"The view pg_publication_sequences provides information about the
mapping between publications and sequences."

I think the existing wording was written to mirror the
'pg_publication_tables' doc. But there, 'information of tables' made
sense because pg_publication_tables has attnames and rowfilters too,
whereas pg_publication_sequences just has the mapping between names,
with no other information.

Modified
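
As a quick illustration of what the view exposes (a sketch against a
server with these patches applied; only the three name columns exist):

SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
ORDER BY pubname, schemaname, sequencename;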

9)
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <warning>
+    <para>
+     During sequence synchronization, the sequence definitions of the publisher
+     and the subscriber are compared.

Now in the code, we give a WARNING for missing sequences on the
publisher as well. Do we need to mention that here? IIUC, this WARNING
for missing sequences can only come up if the worker is respawned to
sync unmatched/failed sequences and meanwhile one of the failed
sequences has been dropped on the publisher. But it would be good to
mention it briefly in the doc.

Modified
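
For context, a sketch of how such a mismatch can arise (names are
illustrative; the exact warning text is whatever the patch emits during
synchronization):

-- Publisher:
CREATE SEQUENCE app_seq INCREMENT BY 1 MAXVALUE 1000000;

-- Subscriber: same name but a different definition, so the comparison
-- done during sequence synchronization would report a definition
-- mismatch for app_seq.
CREATE SEQUENCE app_seq INCREMENT BY 10 MAXVALUE 5000;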

Also, the comment from [1] is handled.
The attached v20250711 version patch has the changes for the same.

[1]: /messages/by-id/CABdArM640YF7MQfMVhEX=e1pJdrnVcCwS_y4XXsbvah=6P9S=A@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250711-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch; charset=US-ASCII)
From fd7352f6dc71c31f4a9ccbf0bf73c08cc7fe0df6 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250711 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 231 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..ee98922c237 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e4fd6347fd1..b66ac6eb865 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index c5fb627aa56..39a53c84e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..663b87a9c80 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 11454bf8a57..67de8beeaf2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2911,7 +2911,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250711-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch; charset=US-ASCII)
From d845c60c3244f93d8c3745c23bb59ff53ffc471c Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:12:45 +0530
Subject: [PATCH v20250711 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c28aa71f570..97bbc7367e9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..d051adf4931 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* Open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1fc19146f46..96779df2941 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250711-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch; charset=US-ASCII)
From acbef37a381c6245cae290d1341fe852d29aa385 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250711 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 373 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 472 insertions(+), 118 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index b2d5332effc..43c3d9c2975 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e23b0de7242..bd575237d5d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,18 +836,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -835,10 +892,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relation in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,22 +951,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +971,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,28 +988,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag is set to true only for the
+				 * REFRESH PUBLICATION SEQUENCES command, indicating that the
+				 * existing sequences need to be re-synchronized by resetting
+				 * their state to INIT.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1051,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop a worker when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1006,10 +1108,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1023,11 +1125,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
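+	/*
+	 * Illustrative example (publication names are hypothetical): the query
+	 * constructed above has the form
+	 *
+	 *   SELECT DISTINCT s.schemaname, s.sequencename
+	 *   FROM pg_catalog.pg_publication_sequences s
+	 *   WHERE s.pubname IN ('pub1', 'pub2')
+	 */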
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9ffe2c38f83..3d5ed0e2f08 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10947,11 +10947,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9a4f86b8457..64ce57f57e8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5144,12 +5144,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5203,7 +5203,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 9bb805f947e..d67a92d8707 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 96779df2941..2bbcdbb4afa 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12286,6 +12286,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86f95b55419..32841da9dde 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4357,7 +4357,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2175,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2185,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 529b2241731..14dad19158b 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250711-0005-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 36a8385f188fd895ab418894ad1e0969efa9c15c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 9 Jul 2025 14:58:11 +0530
Subject: [PATCH v20250711 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of the
       sequences that are in INIT state.
    b) Reports an error if the sequence parameters differ between the
       publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches
       of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences (see
      the usage sketch after this list).
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
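
An illustrative usage sketch (the subscription name "sub" is hypothetical):

    -- refresh tables and sequences; only newly added sequences are
    -- synchronized
    ALTER SUBSCRIPTION sub REFRESH PUBLICATION;

    -- refresh sequences only; every published sequence is reset to INIT
    -- and then synchronized back to READY by the sequencesync worker
    ALTER SUBSCRIPTION sub REFRESH PUBLICATION SEQUENCES;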
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  62 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 631 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  70 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 228 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1153 insertions(+), 110 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index ebd5605afe3..b8f415cd50d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 43c3d9c2975..485f6be15b7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1404,6 +1404,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index d051adf4931..4d03704f39b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bd575237d5d..fb410c5e503 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1066,7 +1066,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..5df81cbec82 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +898,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1213,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1360,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1400,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..eedaffa7b95
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,631 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in a
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
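+/* Sequences of the subscription that are not yet in READY state. */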
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	/* No sequences to sync, so nothing to do */
+	if (list_length(sequence_states_not_ready) == 0)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											     InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/*
+	 * If there is a free sync worker slot, start a new sequencesync worker.
+	 */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		/*
+		 * To prevent starting the sequencesync worker at a high frequency
+		 * after a failure, we store its last failure time. We start the
+		 * sequencesync worker again after waiting at least
+		 * wal_retrieve_retry_interval.
+		 */
+		if (!MyLogicalRepWorker->sequencesync_failure_time ||
+			TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+									   now, wal_retrieve_retry_interval))
+		{
+			MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+			if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID))
+				MyLogicalRepWorker->sequencesync_failure_time = now;
+		}
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs the
+	 * overhead of starting and committing a transaction repeatedly. On the
+	 * other hand, we want to avoid keeping the batch transaction open for
+	 * extended periods, so it is currently limited to 100 sequences per
+	 * batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the list of sequences in the current batch to be fetched
+		 * from the publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
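+			/*
+			 * Both sequences_to_copy (sorted above) and the query result
+			 * (ORDER BY schname, seqname) are expected in the same order, so
+			 * we can scan forward from search_pos; any batch entries skipped
+			 * over were not returned by the publisher and are reported as
+			 * missing later.
+			 */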
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If sequence synchronization for this batch was incomplete, some
+		 * sequences are missing on the publisher. Identify which sequences
+		 * were not fetched.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by the full batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+	seq_count = list_length(subsequences);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the sequence value is updated as the sequence owner,
+		 * unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable, so retrying them would
+ * not help.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..8914f5cca10 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(void)
 {
+	/*
+	 * has_subtables is declared as static, since the same value can be
+	 * reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,12 +174,14 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +194,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +223,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b66ac6eb865..6cf910c6ea6 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1248,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1521,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1567,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1589,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 39a53c84e04..c49b025f16a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply worker, tablesync worker, or
+ * sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4800,6 +4834,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  InvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4826,6 +4863,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4879,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index a925be86944..a7ccfaa8bd9 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 2bbcdbb4afa..3bae4e6dc11 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5696,9 +5696,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 663b87a9c80..2373968ff0d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,11 +271,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -332,15 +341,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..8b0e3bc76d3
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,228 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync sequences newly
+# created on the publisher, but changes to existing sequences should not
+# be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync sequences
+# newly created on the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = off) should
+# not update the sequence value of the newly added sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = false)
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false will not sync newly published sequence'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when the sequence definitions differ between the publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequences\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 67de8beeaf2..59c537393c8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1630,6 +1630,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250711-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From f825d81f6c8bb9056bc9860686b8fe37f17a8799 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:14:18 +0530
Subject: [PATCH v20250711 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now displays which publications include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
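
For example, a minimal usage sketch of the new syntax (the publication names
below are only illustrative):

    -- Publish all sequences in the database.
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- Publish all tables and all sequences in a single publication.
    CREATE PUBLICATION all_pub FOR ALL SEQUENCES, TABLES;

As with FOR ALL TABLES, creating a FOR ALL SEQUENCES publication requires
superuser.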
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 134 ++++--
 src/backend/parser/gram.y                 |  38 +-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  19 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 558 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 707 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..572ec057ad8 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -72,6 +72,8 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 								  AlterPublicationStmt *stmt);
 static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
 static char defGetGeneratedColsOption(DefElem *def);
+static void process_all_objtype_list(List *all_objects_list, bool *all_tables,
+									 bool *all_sequences);
 
 
 static void
@@ -820,6 +822,41 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 	}
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences. Also, check
+ * whether the same pubobjtype has been specified more than once.
+ */
+static void
+process_all_objtype_list(List *all_objects_list, bool *all_tables,
+						 bool *all_sequences)
+{
+	Assert(all_objects_list);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Create new publication.
  */
@@ -841,6 +878,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	AclResult	aclresult;
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
+	bool		all_tables = false;
+	bool		all_sequences = false;
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -848,11 +887,22 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (stmt->puballobj)
+		process_all_objtype_list(stmt->pubobjects,
+								 &all_tables,
+								 &all_sequences);
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +934,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = all_tables;
+	values[Anum_pg_publication_puballsequences - 1] = all_sequences;
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -913,12 +963,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (all_tables)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1490,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1504,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1961,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2085,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 70a0d832a11..9ffe2c38f83 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -259,6 +259,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +585,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10670,7 +10672,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10690,13 +10697,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					n->pubobjects = (List *) $6;
+					n->puballobj = true;
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10808,6 +10816,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1937997ea67..9a4f86b8457 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4392,6 +4392,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4422,9 +4423,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4440,6 +4446,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4460,6 +4467,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4511,8 +4520,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2485d8f360e..b3bfbdc82bc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3302,6 +3302,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..3035b24f26f 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 5ba45a0bcb3..9bb805f947e 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3559,12 +3559,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 28e2e8dc0fd..86f95b55419 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4290,13 +4290,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		puballobj;		/* True if the publication is for all tables,
+								 * all sequences, or both */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..749c316a107 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,92 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +320,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +338,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +370,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +386,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +405,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +416,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +452,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +583,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +878,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1071,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1282,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1325,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1340,10 +1408,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1421,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1450,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1476,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1547,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1558,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1579,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1591,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1603,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1614,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1636,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1667,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1679,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1761,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1782,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1917,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..1cf013e72f6 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 83192038571..11454bf8a57 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2349,6 +2349,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250711-0006-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From b647d9502d38b092a45f9c99ee1cd3cc3f3f35ef Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250711 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 260 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  76 +++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 461 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index aa5b8772436..e1f9af82a46 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
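
(Not part of the patch.) As a quick way to see the per-sequence states described in this catalogs.sgml hunk, a query like the following can be run on the subscriber. It is only an illustrative sketch that assumes sequences are tracked in pg_subscription_rel with the 'i'/'r' codes documented above; the filter on pg_class.relkind = 'S' is how I would restrict the output to sequences.

SELECT s.subname,
       c.relname     AS sequence_name,
       sr.srsubstate AS state        -- 'i' = initialize, 'r' = ready
FROM pg_subscription_rel sr
JOIN pg_class c        ON c.oid = sr.srrelid
JOIN pg_subscription s ON s.oid = sr.srsubid
WHERE c.relkind = 'S'
ORDER BY s.subname, c.relname;

Sequences stuck in state 'i' would suggest the sequence synchronization worker has not completed, for example because of the definition-mismatch or missing-sequence errors documented later in this patch.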
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c7acc0f182f..81c7afad3c8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5188,9 +5188,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5331,8 +5331,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5355,10 +5355,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
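
(Not part of the patch.) To make the worker accounting in this config.sgml hunk concrete, here is a rough sizing sketch for a subscriber with two subscriptions; the numbers are assumptions for illustration only, not recommendations. Note that max_sync_workers_per_subscription can be changed with a reload, while the other two settings only take effect after a server restart.

-- Illustrative values only: 2 leader apply workers, up to 2 table sync
-- workers per subscription, plus the single sequence synchronization worker.
ALTER SYSTEM SET max_sync_workers_per_subscription = 2;
ALTER SYSTEM SET max_logical_replication_workers = 8;
ALTER SYSTEM SET max_worker_processes = 16;
SELECT pg_reload_conf();   -- picks up max_sync_workers_per_subscription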
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e26f7f59d4a..edfc66465a7 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,220 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>, and then,
+   on the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence has been dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are created on the publisher.
+    See also <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, either use
+    <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>
+    to recreate the missing sequence on the publisher, or, if the sequences are
+    no longer required, execute <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+    to remove the stale sequence entries from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber can become stale over time, because
+    incremental sequence updates on the publisher are not replicated.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2290,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2399,8 +2620,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2413,8 +2634,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
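
(Not part of the patch.) Pulling the recovery paths from the subsections of this logical-replication.sgml hunk into one place, a subscriber-side session might look like the sketch below. The sequence name s1, subscription name sub1, and the parameter values are taken from the examples in this patch; adjust them to match the publisher's actual definitions.

-- 1. Definition mismatch: align the subscriber's sequence with the
--    publisher's definition (values shown match the doc example).
ALTER SEQUENCE s1 START WITH 10 INCREMENT BY 1;

-- 2. Check current state on each side to decide whether a refresh is needed.
SELECT last_value, log_cnt, is_called FROM s1;

-- 3. Re-synchronize all published sequences once the definitions match.
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;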
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..0969a2d7075 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +157,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. It is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +301,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +320,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +472,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
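
To make the documented behavior above concrete, here is a minimal sketch of
how the new syntax fits together (the publication, subscription, and
connection names below are made up, not taken from the patch):

-- Publisher: publish every persistent sequence in the database.
CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;

-- Subscriber: sequences are synchronized when the subscription is created
-- or refreshed, not incrementally.
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=pubhost dbname=postgres'
    PUBLICATION all_sequences;

-- Later, e.g. before a planned switchover, re-synchronize all subscribed
-- sequences to the publisher's current values.
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

-- Publisher: list the sequences a publication includes.
SELECT * FROM pg_publication_sequences WHERE pubname = 'all_sequences';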

#262vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#260)
6 attachment(s)
Re: Logical Replication of sequences

On Fri, 11 Jul 2025 at 14:26, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Jul 9, 2025 at 4:11 PM vignesh C <vignesh21@gmail.com> wrote:

3)
SyncFetchRelationStates:
Earlier the name was FetchTableStates. If we really want to use the
'Sync' keyword, we can name it FetchRelationSyncStates, as we are
fetching sync-status only. Thoughts?

Instead of FetchRelationSyncStates, I preferred FetchRelationStates,
and have renamed it accordingly.

Okay, LGTM.

5)
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ InvalidOid,
+ DSM_HANDLE_INVALID);
+ break;
+ }

We set sequencesync_failure_time to 0, but if logicalrep_worker_launch
fails to launch the worker for some reason, the next attempt will not
wait for wal_retrieve_retry_interval before trying to restart it. Is
that intentional?

In other workflows, such as launching the table-sync or apply worker,
this scenario does not arise, because there we maintain a start_time
(which can never be 0) rather than a failure time, and we set
start_time to the current time before attempting to start the workers.
The seq-sync failure time, on the other hand, is only set to non-null
in logicalrep_seqsyncworker_failure(), and we are not guaranteed to
reach that function because logicalrep_worker_launch() may fail before
it. Do you think we should maintain a start time instead of a failure
time for the seq-sync worker as well? Or is there another way to
handle it?

I preferred the suggestion from [1]. Modified it accordingly.

Okay, works for me.

The attached v20250709 version patch has the changes for the same.

Thank you for the patches. Please find a few comments:

1)
Shall we update the pg_publication doc as well to indicate that
pubinsert, pubupdate, pubdelete, pubtruncate, and pubviaroot are
meaningful only when publishing tables? For sequences, these have no
meaning.

Since it is already clearly stated that these apply to tables, I felt
there was no need to mention again that they are not applicable to
sequences.

2)
Shall we call walrcv_disconnect() after the copy is done in
LogicalRepSyncSequences()?

There is a cleanup function registered for the worker that handles
this at worker exit, so this is not required.

3)
Do we really need the for-loop in ProcessSyncingSequencesForApply? I
think this function is inspired by ProcessSyncingTablesForApply(), but
there we need different tablesync workers for different tables. For
sequences that is not the case, so the for-loop can be omitted. If we
do so, we can also amend the comment that says "Walk over all
subscription sequences...."

Handled in the previous version

4)
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(

We can drop regress_seq_sub on the publisher now and check for missing
warnings as the next step.

Modified

5)
I am revisiting the test given in [1], I see there is some document change as:

+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link
linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.

But this doc specifically mentions a failover case. It does not
mention the case presented in [1], i.e. if a user tries to use a
sequence to populate the identity column of a "subscribed" table,
where that sequence is itself synced from the publisher, they may end
up with a corrupted identity column, so such cases should be handled
with caution. Please review once and see whether we need to mention
this case and the example too.

In this case, the identity column data—as well as the non-identity
columns—will be sent by the publisher as part of the row data. This
behavior appears consistent with how non-sequence objects are handled
in a publication.
The following documentation note should be sufficient, as it already
clarifies that "it will retain the last value it synchronized from the
publisher":
On the subscriber, a sequence will retain the last value it
synchronized from the publisher. If the subscriber is used as a
read-only database, this should typically not pose a problem.
But if you have something in mind that should be added, let me know.
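
For illustration (the subscription and sequence names here are hypothetical),
the step such users would run on the subscriber before relying on locally
generated sequence values is:

-- Pull the latest sequence values from the publisher so that values
-- generated locally on the subscriber do not collide with rows that have
-- already been replicated.
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

-- Check the synchronized state of a particular sequence (using the
-- function added in patch 0001 of this series).
SELECT last_value, log_cnt, is_called
FROM pg_sequence_state('public', 'orders_id_seq');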

The attached patch includes these changes.

Regards,
Vignesh

Attachments:

v20250714-0001-Introduce-pg_sequence_state-function-for-e.patch
From d845c60c3244f93d8c3745c23bb59ff53ffc471c Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:12:45 +0530
Subject: [PATCH v20250714 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c28aa71f570..97bbc7367e9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..d051adf4931 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* Open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The current on-disk last_value of the sequence */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1fc19146f46..96779df2941 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0
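
As a quick usage note for the function added above (the sequence name is
arbitrary; for a freshly created, never-used sequence the result matches the
regression test in the patch, i.e. last_value = 1, log_cnt = 0,
is_called = f):

CREATE SEQUENCE sequence_test;

-- Roughly equivalent to reading the sequence relation directly, except
-- that the page LSN of the sequence is returned as well.
SELECT page_lsn, last_value, log_cnt, is_called
FROM pg_sequence_state('public', 'sequence_test');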

v20250714-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch
From acbef37a381c6245cae290d1341fe852d29aa385 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250714 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 373 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 472 insertions(+), 118 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index b2d5332effc..43c3d9c2975 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e23b0de7242..bd575237d5d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,18 +836,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -835,10 +892,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,22 +951,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +971,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,28 +988,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1051,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1006,10 +1108,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1023,11 +1125,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9ffe2c38f83..3d5ed0e2f08 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10947,11 +10947,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9a4f86b8457..64ce57f57e8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5144,12 +5144,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5203,7 +5203,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 9bb805f947e..d67a92d8707 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 96779df2941..2bbcdbb4afa 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12286,6 +12286,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86f95b55419..32841da9dde 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4357,7 +4357,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2175,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2185,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 529b2241731..14dad19158b 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

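For reviewers trying out the patch set, here is a minimal usage sketch based only on the grammar, view, and statistics changes shown in the patch above; the subscription name regress_testsub is just an illustrative example taken from the regression tests:

    -- Publisher: the new pg_publication_sequences view lists the sequences
    -- carried by each publication.
    SELECT pubname, schemaname, sequencename FROM pg_publication_sequences;

    -- Subscriber: resynchronize all published sequences.  Per the new checks,
    -- this must run outside a transaction block and only on an enabled
    -- subscription.
    ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;

    -- Errors during sequence synchronization are counted in the new
    -- sequence_sync_error_count column.
    SELECT subname, sequence_sync_error_count FROM pg_stat_subscription_stats;
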
Attachment: v20250714-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From fd7352f6dc71c31f4a9ccbf0bf73c08cc7fe0df6 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250714 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 231 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..ee98922c237 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e4fd6347fd1..b66ac6eb865 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index c5fb627aa56..39a53c84e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..663b87a9c80 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 11454bf8a57..67de8beeaf2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2911,7 +2911,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250714-0005-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 1fa69067c1175f985710dfd8987b119f4ecb1351 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 9 Jul 2025 14:58:11 +0530
Subject: [PATCH v20250714 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state, using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  62 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 631 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  70 +-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1164 insertions(+), 110 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index ebd5605afe3..b8f415cd50d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 43c3d9c2975..485f6be15b7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1404,6 +1404,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index d051adf4931..4d03704f39b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequence syncworker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bd575237d5d..fb410c5e503 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1066,7 +1066,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..5df81cbec82 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +898,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1213,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1360,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1400,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..aad9bee03d7
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,631 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. In contrast, the launcher
+ *    process operates without direct database access and would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+List	   *sequence_states_not_ready = NIL;
+
+/*
+ * Handle the apply worker's side of sequence synchronization.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	/* No sequences to sync, so nothing to do */
+	if (list_length(sequence_states_not_ready) == 0)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+											     InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/*
+	 * If there is a free sync worker slot, start a new sequencesync
+	 * worker.
+	 */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		/*
+		 * To prevent starting the sequencesync worker at a high frequency
+		 * after a failure, we store its last failure time. We start the
+		 * sequencesync worker again after waiting at least
+		 * wal_retrieve_retry_interval.
+		 */
+		if (!MyLogicalRepWorker->sequencesync_failure_time ||
+			TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+									   now, wal_retrieve_retry_interval))
+		{
+			MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+			if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											InvalidOid,
+											DSM_HANDLE_INVALID))
+				MyLogicalRepWorker->sequencesync_failure_time = now;
+		}
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the full batch size rather than by the
+		 * number of fetched rows, since some sequences may be missing on
+		 * the publisher.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	int			seq_count;
+	Oid			subid = MyLogicalRepWorker->subid;
+	MemoryContext oldctx;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+	seq_count = list_length(subsequences);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && seq_count)
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling.  Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..8914f5cca10 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so the sequencesync failure callback is not needed. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,19 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates();
+
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -121,17 +146,22 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 /*
  * Common code to fetch the up-to-date sync state info into the static lists.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we update both table_states_not_ready and sequence_states_not_ready
+ * simultaneously to ensure consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates()
 {
+	/*
+	 * has_subtables is static because its value remains valid until the
+	 * relation states are invalidated (see InvalidateRelationStates).
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -144,12 +174,14 @@ FetchRelationStates(bool *started_tx)
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
+		list_free_deep(sequence_states_not_ready);
 		table_states_not_ready = NIL;
+		sequence_states_not_ready = NIL;
 
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +194,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				sequence_states_not_ready = lappend(sequence_states_not_ready, rstate);
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +223,11 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b66ac6eb865..6cf910c6ea6 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1248,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1521,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1567,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1589,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates();
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 39a53c84e04..c49b025f16a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4800,6 +4834,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  InvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4826,6 +4863,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4879,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index a925be86944..a7ccfaa8bd9 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 2bbcdbb4afa..3bae4e6dc11 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5696,9 +5696,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 663b87a9c80..2373968ff0d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -238,9 +241,11 @@ extern PGDLLIMPORT bool in_remote_transaction;
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
 extern PGDLLIMPORT List *table_states_not_ready;
+extern PGDLLIMPORT List *sequence_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +253,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,11 +271,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(void);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -332,15 +341,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync newly published
+# sequences from the publisher, but changes to already-synced sequences
+# should not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync newly
+# published sequences from the publisher, as well as changes to existing
+# sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data = off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when sequence definitions do not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value differs between the
+# publisher and the subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 67de8beeaf2..59c537393c8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1630,6 +1630,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250714-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From f825d81f6c8bb9056bc9860686b8fe37f17a8799 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:14:18 +0530
Subject: [PATCH v20250714 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

In addition, psql's \d command now lists the publications that include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
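
To make the new syntax concrete, here is a minimal sketch (object names
are placeholders) mirroring the regression tests added below:

    -- publish every sequence in the database; requires superuser
    CREATE PUBLICATION pub_allseq FOR ALL SEQUENCES;

    -- ALL SEQUENCES can be combined with ALL TABLES in one FOR ALL list
    CREATE PUBLICATION pub_all FOR ALL SEQUENCES, TABLES;

    -- rejected: an object type may appear only once in the list
    CREATE PUBLICATION pub_dup FOR ALL SEQUENCES, TABLES, SEQUENCES;  -- ERROR

    -- rejected: FOR ALL cannot be mixed with TABLE or TABLES IN SCHEMA
    CREATE PUBLICATION pub_mix FOR ALL SEQUENCES, TABLE some_table;   -- ERROR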
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 134 ++++--
 src/backend/parser/gram.y                 |  38 +-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  19 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 558 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 707 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..572ec057ad8 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -72,6 +72,8 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 								  AlterPublicationStmt *stmt);
 static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
 static char defGetGeneratedColsOption(DefElem *def);
+static void process_all_objtype_list(List *all_objects_list, bool *all_tables,
+									 bool *all_sequences);
 
 
 static void
@@ -820,6 +822,41 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 	}
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check whether a publication object type has been specified more than once.
+ */
+static void
+process_all_objtype_list(List *all_objects_list, bool *all_tables,
+						 bool *all_sequences)
+{
+	Assert(all_objects_list);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Create new publication.
  */
@@ -841,6 +878,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	AclResult	aclresult;
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
+	bool		all_tables = false;
+	bool		all_sequences = false;
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -848,11 +887,22 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (stmt->puballobj)
+		process_all_objtype_list(stmt->pubobjects,
+								 &all_tables,
+								 &all_sequences);
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +934,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(all_tables);
+	values[Anum_pg_publication_puballsequences - 1] = BoolGetDatum(all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -913,12 +963,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (all_tables)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1490,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1504,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1961,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2085,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 70a0d832a11..9ffe2c38f83 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -259,6 +259,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +585,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10670,7 +10672,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10690,13 +10697,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					n->pubobjects = (List *) $6;
+					n->puballobj = true;
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10808,6 +10816,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1937997ea67..9a4f86b8457 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4392,6 +4392,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4422,9 +4423,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4440,6 +4446,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4460,6 +4467,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4511,8 +4520,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2485d8f360e..b3bfbdc82bc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3302,6 +3302,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..3035b24f26f 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 5ba45a0bcb3..9bb805f947e 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3559,12 +3559,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass all
+	 * sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 28e2e8dc0fd..86f95b55419 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4290,13 +4290,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		puballobj;		/* True if the publication is for all tables,
+								 * all sequences, or both */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..749c316a107 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,92 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +320,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +338,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +370,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +386,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +405,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +416,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +452,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +583,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +878,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1071,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1282,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1325,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1340,10 +1408,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1421,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1450,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1476,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1547,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1558,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1579,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1591,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1603,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1614,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1636,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1667,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1679,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1761,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1782,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1917,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..1cf013e72f6 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 83192038571..11454bf8a57 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2349,6 +2349,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

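To make the new syntax easier to follow while skimming the regression tests above, here is a minimal sketch of the intended workflow, assuming the patch set applies cleanly; the publication and subscription names are placeholders, and the subscriber-side command is the one documented in the 0006 patch below:

-- publisher: publish the current state of all sequences in the database
-- (publication name is a placeholder)
CREATE PUBLICATION pub_all_sequences FOR ALL SEQUENCES;
SELECT pubname, puballtables, puballsequences
FROM pg_publication
WHERE pubname = 'pub_all_sequences';

-- subscriber: re-synchronize the published sequences on demand
-- (subscription name is a placeholder)
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;
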
Attachment: v20250714-0006-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From dd67b4a4066cce39291bebfdd713203644001e7a Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250714 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 260 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  76 +++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 461 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index aa5b8772436..e1f9af82a46 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c7acc0f182f..81c7afad3c8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5188,9 +5188,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5331,8 +5331,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5355,10 +5355,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e26f7f59d4a..edfc66465a7 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,220 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher. An ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are created in the publisher.
+    See also <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, either use
+    <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>
+    to recreate the missing sequence on the publisher, or, if the sequence are
+    no longer required, execute <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+    to remove the stale sequence entries from synchronization in the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2290,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2399,8 +2620,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2413,8 +2634,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 4265a22d4de..8fa27144da8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..0969a2d7075 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +157,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +301,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +320,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +472,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#263shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#262)
Re: Logical Replication of sequences

On Mon, Jul 14, 2025 at 10:03 AM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 11 Jul 2025 at 14:26, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Jul 9, 2025 at 4:11 PM vignesh C <vignesh21@gmail.com> wrote:

3)
SyncFetchRelationStates:
Earlier the name was FetchTableStates. If we really want to use the
'Sync' keyword, we can name it FetchRelationSyncStates, as we are
fetching sync-status only. Thoughts?

Instead of FetchRelationSyncStates, I preferred FetchRelationStates,
and renamed it accordingly.

Okay, LGTM.

5)
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ InvalidOid,
+ DSM_HANDLE_INVALID);
+ break;
+ }

We set sequencesync_failure_time to 0, but if logicalrep_worker_launch
is not able to launch the worker due to some reason, next time it will
not even wait for 'wal_retrieve_retry_interval time' to attempt
restarting it again. Is that intentional?

In other workflows such as while launching table-sync or apply worker,
this scenario does not arise. This is because we maintain start_time
there which can never be 0 instead of failure time and before
attempting to start the workers, we set start_time to current time.
The seq-sync failure-time OTOH is only set to non-null in
logicalrep_seqsyncworker_failure() and it is not necessary that we
will hit that function as the logicalrep_worker_launch() may fail
before that itself. Do you think we shall maintain start-time instead
of failure-time for seq-sync worker as well? Or is there any other way
to handle it?

I preferred the suggestion from [1]. Modified it accordingly.

Okay, works for me.

The attached v20250709 version patch has the changes for the same.

Thank You for the patches. Please find a few comments:

1)
Shall we update pg_publication doc as well to indicate that pubinsert,
pubupdate, pubdelete , pubtruncate, pubviaroot are meaningful only
when publishing tables. For sequences, these have no meaning.

Since it is clearly mentioned it is for tables, I felt no need to
mention again it is not applicable for sequences.

2)
Shall we have walrcv_disconnect() after copy is done in
LogicalRepSyncSequences()

There is a cleanup function registered for the worker to handle this
at the worker exit. So this is not required.

3)
Do we really need for-loop in ProcessSyncingSequencesForApply? I think
this function is inspired from ProcessSyncingTablesForApply() but
there we need different tablesync workers for different tables. For
sequences, that is not the case and thus for-loop can be omitted. If
we do so, we can amend the comments too where it says " Walk over all
subscription sequences....."

Handled in the previous version

4)
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(

We can drop regress_seq_sub on the publisher now and check for missing
warnings as the next step.

Modified

5)
I am revisiting the test given in [1], I see there is some document change as:

+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link
linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.

But this doc specifically mentions a failover case. It does not
mention the case presented in [1] i.e. if user is trying to use
sequence to populate identity column of a "subscribed" table where the
sequence is also synced originally from publisher, then he may end up
with corrupted state
of IDENTITY column, and thus such cases should be used with caution.
Please review once and see if we need to mention this and the example
too.

In this case, the identity column data—as well as the non-identity
columns—will be sent by the publisher as part of the row data. This
behavior appears consistent with how non-sequence objects are handled
in a publication.
The following documentation note should be sufficient, as it already
clarifies that "it will retain the last value it synchronized from the
publisher":

Not very sure about it, but let me think more.

On the subscriber, a sequence will retain the last value it
synchronized from the publisher. If the subscriber is used as a
read-only database, this should typically not pose a problem.
But if you have something in mind which should be added let me know.

Will let you know if I come up with something better to add here.

The attached patch has the changes for the same.

Thank You. Few comments:

1)
patch 005 has trailing whitespaces issue.

2)
In LogicalRepSyncSequences(), do we really need this:

+ seq_count = list_length(subsequences);;

seq_count is only used at the end to figure out if we really had some
sequences. We can simply check subsequences against NIL for that
purpose. If we really want to use list_length as a check, then we
shall move it to the end where we use it.
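
(Illustrative sketch of the first option; the variable name comes from the
quoted snippet, and exactly where LogicalRepSyncSequences() would place such
a check is assumed rather than taken from the patch.)

/*
 * Sketch only: test the list itself at the point where the
 * "did we really have some sequences?" question is asked, instead of
 * carrying a separate seq_count from the top of the function.
 */
if (subsequences == NIL)
    return;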

3)
LogicalRepSyncSequences():
+ MemoryContext oldctx;

we can move this to a for-loop where it is being used.

4)
The only usage of sequence_states_not_ready is this now:

+ /* No sequences to sync, so nothing to do */
+ if (list_length(sequence_states_not_ready) == 0)
+ return;

Now, do we need to have it as a List?
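
(Purely as an illustration of that alternative, with invented variable names
rather than anything from the patch, a flag set while scanning the
subscription's relations would serve the same purpose:)

/* Hypothetical sketch; variable names invented for illustration only. */
bool    have_init_sequences = false;

/* ... while walking the subscription's pg_subscription_rel entries ... */
if (get_rel_relkind(srrelid) == RELKIND_SEQUENCE &&
    srsubstate != SUBREL_STATE_READY)
    have_init_sequences = true;

/* caller: no sequences to sync, so nothing to do */
if (!have_init_sequences)
    return;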

5)
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher. An ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are created in the publisher.
+    See also <link
linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, either use
+    <link linkend="sql-createsequence"><command>CREATE
SEQUENCE</command></link>
+    to recreate the missing sequence on the publisher, or, if the sequence are
+    no longer required, execute <link
linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+    to remove the stale sequence entries from synchronization in the
subscriber.
+   </para>
+  </sect2>
+

Please see if this looks appropriate, I have added drop-sequence
option as well and corrected few trivial things:

During sequence synchronization, if a sequence is dropped on the
publisher, an ERROR is logged listing the missing sequences before the
process exits. The apply worker detects this failure and repeatedly
respawns the sequence synchronization worker to continue the
synchronization process until the sequences are either recreated on
the publisher, dropped on the subscriber, or removed from the
synchronization list.

To resolve this issue, either recreate the missing sequence on the
publisher using CREATE SEQUENCE, drop the sequences on the subscriber
if they are no longer needed using DROP SEQUENCE, or run ALTER
SUBSCRIPTION ... REFRESH PUBLICATION to remove these sequences from
synchronization on the subscriber.

thanks
Shveta

#264Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#262)
Re: Logical Replication of sequences

On Mon, Jul 14, 2025 at 10:03 AM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 11 Jul 2025 at 14:26, shveta malik <shveta.malik@gmail.com> wrote:

I have picked this up again for a final review and started with 0001. I
think 0001 mostly looks good to me, except for a few comments:

1.
+ lsn = PageGetLSN(page);
+ last_value = seq->last_value;
+ log_cnt = seq->log_cnt;
+ is_called = seq->is_called;
+
+ UnlockReleaseBuffer(buf);
+ sequence_close(seqrel, NoLock);
+
+ /* Page LSN for the sequence */
+ values[0] = LSNGetDatum(lsn);
+
+ /* The value most recently returned by nextval in the current session */
+ values[1] = Int64GetDatum(last_value);
+

I think we can avoid using extra variables like lsn, last_value, etc.;
instead we can copy directly into the values[] array as shown below.

values[0] = LSNGetDatum(PageGetLSN(page));
values[1] = Int64GetDatum(seq->last_value);
...
UnlockReleaseBuffer(buf);
sequence_close(seqrel, NoLock);

2.
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.

Shall we change 'is the page LSN of the sequence' to 'is the page LSN
of the sequence relation'?

And I think this field doesn't seem to be very relevant for the user,
although we are exposing it because we need it for internal use.
Maybe at least the commit message of this patch should give some
details on why we need to expose this field.
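
(For context, an illustrative call against a server with the 0001 patch
applied, borrowing the sequence name from its regression test; it shows the
page_lsn field in question alongside the user-visible columns:)

SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('public', 'sequence_test');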

--
Regards,
Dilip Kumar
Google

#265vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#263)
6 attachment(s)
Re: Logical Replication of sequences

On Tue, 15 Jul 2025 at 11:27, shveta malik <shveta.malik@gmail.com> wrote:

Thank You. Few comments:

1)
patch 005 has trailing whitespaces issue.

Fixed them

2)
In LogicalRepSyncSequences(), do we really need this:

+ seq_count = list_length(subsequences);;

seq_count is only used at the end to figure out if we really had some
sequences. We can simply check subsequences against NIL for that
purpose. If we really want to use list_length as a check, then we
shall move it at the end where we use it.

Modified

3)
LogicalRepSyncSequences():
+ MemoryContext oldctx;

we can move this to a for-loop where it is being used.

Modified

4)
The only usage of sequence_states_not_ready is this now:

+ /* No sequences to sync, so nothing to do */
+ if (list_length(sequence_states_not_ready) == 0)
+ return;

Now, do we need to have it as a List?

Removed this list variable and used a output function argument in
FetchRelationStates

5)
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher. An ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are created in the publisher.
+    See also <link
linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, either use
+    <link linkend="sql-createsequence"><command>CREATE
SEQUENCE</command></link>
+    to recreate the missing sequence on the publisher, or, if the sequence are
+    no longer required, execute <link
linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+    to remove the stale sequence entries from synchronization in the
subscriber.
+   </para>
+  </sect2>
+

Please see if this looks appropriate, I have added drop-sequence
option as well and corrected few trivial things:

During sequence synchronization, if a sequence is dropped on the
publisher, an ERROR is logged listing the missing sequences before the
process exits. The apply worker detects this failure and repeatedly
respawns the sequence synchronization worker to continue the
synchronization process until the sequences are either recreated on
the publisher, dropped on the subscriber, or removed from the
synchronization list.

To resolve this issue, either recreate the missing sequence on the
publisher using CREATE SEQUENCE, drop the sequences on the subscriber
if they are no longer needed using DROP SEQUENCE, or run ALTER
SUBSCRIPTION ... REFRESH PUBLICATION to remove these sequences from
synchronization on the subscriber.

Modified

The attached v20250716 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20250716-0001-Introduce-pg_sequence_state-function-for-e.patch (application/octet-stream)
From 164c35ff0a74a457d3c37f791cc428805186cd3c Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:12:45 +0530
Subject: [PATCH v20250716 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.
In subsequent patches, this function will be used to fetch the
sequence states from the publisher in order to synchronize them on
the subscriber.
---
 doc/src/sgml/func.sgml                 | 27 +++++++++
 src/backend/commands/sequence.c        | 80 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 ++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 122 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index f5a0e0954a1..4fe158002a9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..d051adf4931 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,85 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	XLogRecPtr	lsn;
+	int64		last_value;
+	int64		log_cnt;
+	bool		is_called;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* Open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	lsn = PageGetLSN(page);
+	last_value = seq->last_value;
+	log_cnt = seq->log_cnt;
+	is_called = seq->is_called;
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(lsn);
+
+	/* The value most recently returned by nextval in the current session */
+	values[1] = Int64GetDatum(last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(is_called);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1fc19146f46..96779df2941 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

v20250716-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From 4dd3091f0bfd81177152aa0d07cf4e9d73698a0f Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250716 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 373 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 472 insertions(+), 118 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index b2d5332effc..43c3d9c2975 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e23b0de7242..bd575237d5d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The relation list includes
+			 * both tables and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,18 +836,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to reflect the current set of publication objects
+ * (tables and sequences) associated with the subscription's publications.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or last refreshed.
+ *
+ * Note: this is a common function for handling the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -835,10 +892,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,22 +951,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a
+		 * new state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +971,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,28 +988,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting their subscription state back to INIT.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1051,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1006,10 +1108,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1023,11 +1125,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a6dfd92f313..7a272027e58 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10954,11 +10954,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9a4f86b8457..64ce57f57e8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5144,12 +5144,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5203,7 +5203,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 4ab0c2edb95..f4f8dec6710 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 96779df2941..2bbcdbb4afa 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12286,6 +12286,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 27e23d972fc..17b03a9527f 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4358,7 +4358,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2175,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2185,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 529b2241731..14dad19158b 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
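
To make the subscriber-side behavior added by the preceding patch easier
to follow, here is a minimal usage sketch. The publication and subscription
names (pub_seq, sub_seq) are placeholders, and it assumes a publication that
already publishes sequences (the publication-side support comes from the
earlier patches in this series):

-- Publisher: list the sequences exposed by a publication, using the
-- new pg_publication_sequences view.
SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'pub_seq';

-- Subscriber: re-synchronize all published sequences. This resets the
-- pg_subscription_rel state of every published sequence to INIT so the
-- sequence synchronization worker copies their values again.
ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;

-- Subscriber: check per-relation sync state; sequences still in
-- state 'i' (INIT) have not been synchronized yet.
SELECT sr.srrelid::regclass, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relkind = 'S';

As with REFRESH PUBLICATION, the new REFRESH PUBLICATION SEQUENCES form is
rejected inside a transaction block and for disabled subscriptions, per the
checks added in subscriptioncmds.c above.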

Attachment: v20250716-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 20df7f0185d6a900f4d7c33f501c46f095886009 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250716 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 231 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..ee98922c237 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e4fd6347fd1..b66ac6eb865 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index c5fb627aa56..39a53c84e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..663b87a9c80 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8cd57e276f2..673a34869aa 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2911,7 +2911,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250716-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 73f0a65c268147d77fc588c81fd31b9577d5a1ff Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:14:18 +0530
Subject: [PATCH v20250716 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands are enhanced: \d now shows which
publications include the specified sequence, and \dRp shows whether a
publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
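
As a minimal usage sketch (the publication names below are illustrative;
the same syntax is exercised by the regression tests added in this patch):

    -- publish every sequence in the database
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- publish all tables and all sequences in one publication
    CREATE PUBLICATION pub_all FOR ALL SEQUENCES, TABLES;

    -- mixing with individual objects, e.g. "FOR ALL SEQUENCES, TABLE t1",
    -- is rejected, per the note above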
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 134 ++++--
 src/backend/parser/gram.y                 |  38 +-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  19 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 558 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 707 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..572ec057ad8 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -72,6 +72,8 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 								  AlterPublicationStmt *stmt);
 static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
 static char defGetGeneratedColsOption(DefElem *def);
+static void process_all_objtype_list(List *all_objects_list, bool *all_tables,
+									 bool *all_sequences);
 
 
 static void
@@ -820,6 +822,41 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 	}
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also checks that no object type is specified more than once.
+ */
+static void
+process_all_objtype_list(List *all_objects_list, bool *all_tables,
+						 bool *all_sequences)
+{
+	Assert(all_objects_list);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Create new publication.
  */
@@ -841,6 +878,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	AclResult	aclresult;
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
+	bool		all_tables = false;
+	bool		all_sequences = false;
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -848,11 +887,22 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (stmt->puballobj)
+		process_all_objtype_list(stmt->pubobjects,
+								 &all_tables,
+								 &all_sequences);
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +934,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(all_tables);
+	values[Anum_pg_publication_puballsequences - 1] = BoolGetDatum(all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -913,12 +963,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (all_tables)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1490,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1504,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1961,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2085,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 73345bb3c70..a6dfd92f313 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -259,6 +259,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +585,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10677,7 +10679,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10697,13 +10704,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					n->pubobjects = (List *) $6;
+					n->puballobj = true;
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10815,6 +10823,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1937997ea67..9a4f86b8457 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4392,6 +4392,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4422,9 +4423,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4440,6 +4446,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4460,6 +4467,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4511,8 +4520,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2485d8f360e..b3bfbdc82bc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3302,6 +3302,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..3035b24f26f 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 37524364290..4ab0c2edb95 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3575,12 +3575,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * Indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones).
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..27e23d972fc 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,13 +4291,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		puballobj;		/* True if the publication is for all tables,
+								 * all sequences, or both */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..749c316a107 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,92 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +320,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +338,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +370,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +386,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +405,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +416,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +452,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +583,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +878,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1071,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1282,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1325,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1340,10 +1408,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1421,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1450,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1476,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1547,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1558,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1579,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1591,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1603,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1614,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1636,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1667,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1679,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1761,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1782,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1917,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..1cf013e72f6 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ff050e93a50..8cd57e276f2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2349,6 +2349,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250716-0005-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From fd4fa8b17b0b35e77ea509564114337b6d3772ba Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 9 Jul 2025 14:58:11 +0530
Subject: [PATCH v20250716 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing the updates in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences (example usage below).
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
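
For illustration, a minimal usage sketch with this patch series applied
(publication, subscription and connection names below are placeholders, not
part of the patch):

    -- publisher
    CREATE PUBLICATION pub1 FOR ALL SEQUENCES;

    -- subscriber
    CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres host=publisher' PUBLICATION pub1;

    -- later, bring all published sequences up to date again
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- sequences appear in pg_subscription_rel and move from INIT ('i') to READY ('r')
    SELECT srrelid::regclass, srsubstate FROM pg_subscription_rel;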
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  62 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 623 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  80 ++-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  28 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1164 insertions(+), 111 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index ebd5605afe3..b8f415cd50d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 43c3d9c2975..485f6be15b7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1404,6 +1404,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index d051adf4931..4d03704f39b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker, to set the
+ * sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bd575237d5d..fb410c5e503 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1066,7 +1066,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..5df81cbec82 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Record the sequencesync worker's failure time in the shared-memory slot of
+ * the subscription's apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +898,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1213,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1360,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1400,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..19984cbfc2c
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,623 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/*
+	 * If there is a free sync worker slot, start a new sequencesync worker.
+	 */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		/*
+		 * To prevent starting the sequencesync worker at a high frequency
+		 * after a failure, we store its last failure time. We start the
+		 * sequencesync worker again after waiting at least
+		 * wal_retrieve_retry_interval.
+		 */
+		if (!MyLogicalRepWorker->sequencesync_failure_time ||
+			TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+									   now, wal_retrieve_retry_interval))
+		{
+			MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+			if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										  MyLogicalRepWorker->dbid,
+										  MySubscription->oid,
+										  MySubscription->name,
+										  MyLogicalRepWorker->userid,
+										  InvalidOid,
+										  DSM_HANDLE_INVALID))
+				MyLogicalRepWorker->sequencesync_failure_time = now;
+		}
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
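+		/*
+		 * Result columns expected from the remote query below, in order:
+		 * schname and seqname, the pg_sequence_state() columns (page_lsn,
+		 * last_value, log_cnt, is_called), then seqtypid, seqstart,
+		 * seqincrement, seqmin, seqmax and seqcycle from pg_sequence.
+		 */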
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
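+			/*
+			 * sequences_to_copy was sorted above and the remote result is
+			 * ordered by (schname, seqname), so a single forward scan with
+			 * search_pos is expected to match each remote row to its local
+			 * entry.
+			 */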
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
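+				/*
+				 * Record the remote page LSN at which this sequence was
+				 * synchronized, along with the READY state.
+				 */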
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If synchronization for this batch was incomplete, some sequences
+		 * must be missing on the publisher.  Identify which ones so that they
+		 * can be reported later.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by the full batch size rather than by the
+		 * number of fetched rows, since some sequences may be missing on the
+		 * publisher and the two counts may differ.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the synchronization runs as the sequence owner,
+		 * unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && (subsequences != NIL))
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..17e37f2feee 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+		{
+			bool	has_pending_sequences = false;
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates(&has_pending_sequences);
 			ProcessSyncingTablesForApply(current_lsn);
+			if (has_pending_sequences)
+				ProcessSyncingSequencesForApply();
+
+			break;
+		}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +149,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static since
+	 * their values remain valid until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +177,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +186,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +199,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +228,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b66ac6eb865..5a8420f2ac7 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1248,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1521,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1567,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1589,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 39a53c84e04..c49b025f16a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, or sequencesync
+ * worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4800,6 +4834,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  InvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4826,6 +4863,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4879,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 2bbcdbb4afa..3bae4e6dc11 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5696,9 +5696,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 663b87a9c80..58767a85179 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -241,6 +244,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +252,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,11 +270,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -332,15 +340,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched for
+# the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence definitions is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 673a34869aa..b38c684b3d7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250716-0006-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From 65fae5649d49396b7df7e5d44f262d4218e304e0 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250716 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  76 +++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 464 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index aa5b8772436..e1f9af82a46 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c7acc0f182f..81c7afad3c8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5188,9 +5188,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5331,8 +5331,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5355,10 +5355,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e26f7f59d4a..7cf90be2deb 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2293,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2399,8 +2623,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2413,8 +2637,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 823afe1b30b..a1a2be86d38 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..0969a2d7075 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +157,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables; it is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +301,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +320,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +472,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#266vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#264)
6 attachment(s)
Re: Logical Replication of sequences

On Wed, 16 Jul 2025 at 11:15, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Jul 14, 2025 at 10:03 AM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 11 Jul 2025 at 14:26, shveta malik <shveta.malik@gmail.com> wrote:

I have picked this up again for final review, starting with 0001. I
think 0001 mostly looks good to me, except for a few comments:

1.
+ lsn = PageGetLSN(page);
+ last_value = seq->last_value;
+ log_cnt = seq->log_cnt;
+ is_called = seq->is_called;
+
+ UnlockReleaseBuffer(buf);
+ sequence_close(seqrel, NoLock);
+
+ /* Page LSN for the sequence */
+ values[0] = LSNGetDatum(lsn);
+
+ /* The value most recently returned by nextval in the current session */
+ values[1] = Int64GetDatum(last_value);
+

I think we can avoid using extra variables like lsn, last_value, etc.;
instead we can directly copy into values[$] as shown below.

values[0] = LSNGetDatum(PageGetLSN(page));
values[1] = Int64GetDatum(seq->last_value);
...
UnlockReleaseBuffer(buf);
sequence_close(seqrel, NoLock);

Modified

2.
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.

Shall we change 'is the page LSN of the sequence' to 'is the page LSN
of the sequence relation'?

Modified

And I think this field doesn't seem to be very relevant for the user,
although we are exposing it because we need it for internal use.
Maybe at least the commit message of this patch should give some
details on why we need to expose this field.

Updated commit message

The attached v20250717 version of the patches includes these changes.
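
To make the intended use of the exposed page LSN concrete, here is a minimal
sketch of the out-of-sync check described in the 0001 commit message. It
assumes the pg_sequence_state() function added by 0001 and the
pg_subscription_rel.srsublsn tracking proposed in the later patches of this
series; the sequence 's1' and subscription 'sub1' are only placeholder names.

-- On the publisher: current on-disk state of the sequence, including its page LSN
SELECT page_lsn, last_value, is_called
FROM pg_sequence_state('public', 's1');

-- On the subscriber: the publisher page LSN recorded when the sequence was last synchronized
SELECT srsublsn
FROM pg_subscription_rel
WHERE srrelid = 'public.s1'::regclass;

-- If the publisher's page_lsn is newer than the stored srsublsn, the sequence
-- has advanced since the last synchronization and can be re-synchronized with:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;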

Regards,
Vignesh

Attachments:

Attachment: v20250717-0001-Introduce-pg_sequence_state-function-for-e.patch (text/x-patch)
From 2001eacd1048c0af068f6258bf3912346562dd33 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:12:45 +0530
Subject: [PATCH v20250717 1/6] Introduce pg_sequence_state function for
 enhanced sequence management

This patch introduces a new function, 'pg_sequence_state', which
allows retrieval of sequence values, including the associated LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.
---
 doc/src/sgml/func.sgml                 | 27 ++++++++++
 src/backend/commands/sequence.c        | 70 ++++++++++++++++++++++++++
 src/include/catalog/pg_proc.dat        |  8 +++
 src/test/regress/expected/sequence.out |  6 +++
 src/test/regress/sql/sequence.sql      |  1 +
 5 files changed, 112 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index f5a0e0954a1..1c22f8278f2 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_sequence_state</primary>
+        </indexterm>
+        <function>pg_sequence_state</function> ( <parameter>schema_name</parameter> <type>text</type>,
+        <parameter>sequence_name</parameter> <type>text</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>page_lsn</parameter> <type>pg_lsn</type>,
+        <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence relation, <literal>last_value</literal> is
+        the current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..1f96d445f2d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1885,6 +1886,75 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 }
 
+/*
+ * Return the current on-disk state of the sequence.
+ *
+ * Note: This is roughly equivalent to selecting the data from the sequence,
+ * except that it also returns the page LSN.
+ */
+Datum
+pg_sequence_state(PG_FUNCTION_ARGS)
+{
+	char	   *schema_name = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	char	   *sequence_name = text_to_cstring(PG_GETARG_TEXT_PP(1));
+	Oid			seq_relid;
+	SeqTable	elm;
+	Relation	seqrel;
+	Buffer		buf;
+	Page		page;
+	HeapTupleData seqtuple;
+	Form_pg_sequence_data seq;
+	Datum		result;
+
+	TupleDesc	tupdesc;
+	HeapTuple	tuple;
+	Datum		values[4];
+	bool		nulls[4] = {0};
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	seq_relid = RangeVarGetRelid(makeRangeVar(schema_name, sequence_name, -1),
+								 NoLock, true);
+	if (!OidIsValid(seq_relid))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("sequence \"%s.%s\" does not exist",
+					   schema_name, sequence_name));
+
+	/* Open and lock sequence */
+	init_sequence(seq_relid, &elm, &seqrel);
+
+	if (pg_class_aclcheck(elm->relid, GetUserId(),
+						  ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
+		ereport(ERROR,
+				errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				errmsg("permission denied for sequence %s",
+					   RelationGetRelationName(seqrel)));
+
+	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	page = BufferGetPage(buf);
+
+	/* Page LSN for the sequence */
+	values[0] = LSNGetDatum(PageGetLSN(page));
+
+	/* The last value stored in the sequence's on-disk tuple */
+	values[1] = Int64GetDatum(seq->last_value);
+
+	/* How many fetches remain before a new WAL record must be written */
+	values[2] = Int64GetDatum(seq->log_cnt);
+
+	/* Indicates whether the sequence has been used */
+	values[3] = BoolGetDatum(seq->is_called);
+
+	UnlockReleaseBuffer(buf);
+	sequence_close(seqrel, NoLock);
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
 
 void
 seq_redo(XLogReaderState *record)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1fc19146f46..96779df2941 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,6 +3433,14 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
+{ oid => '8051',
+  descr => 'current on-disk sequence state',
+  proname => 'pg_sequence_state', provolatile => 'v',
+  prorettype => 'record', proargtypes => 'text text',
+  proallargtypes => '{text,text,pg_lsn,int8,int8,bool}',
+  proargmodes => '{i,i,o,o,o,o}',
+  proargnames => '{schema_name,sequence_name,page_lsn,last_value,log_cnt,is_called}',
+  prosrc => 'pg_sequence_state' },
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..bc22e72a059 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -161,6 +161,12 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 NOTICE:  relation "sequence_test" already exists, skipping
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
+ last_value | log_cnt | is_called 
+------------+---------+-----------
+          1 |       0 | f
+(1 row)
+
 SELECT nextval('sequence_test'::text);
  nextval 
 ---------
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..e8fd0d3c9fe 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -112,6 +112,7 @@ SELECT nextval('serialTest2_f6_seq');
 CREATE SEQUENCE sequence_test;
 CREATE SEQUENCE IF NOT EXISTS sequence_test;
 
+SELECT last_value, log_cnt, is_called  FROM  pg_sequence_state('public', 'sequence_test');
 SELECT nextval('sequence_test'::text);
 SELECT nextval('sequence_test'::regclass);
 SELECT currval('sequence_test'::text);
-- 
2.43.0

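For illustration only (not part of the patch): a minimal sketch of how pg_sequence_state() is expected to be used, assuming the later patches in this series store the synchronized page LSN in pg_subscription_rel.srsublsn; the sequence name is the one from the regression test above.

-- On the publisher: read the sequence's current on-disk state and page LSN.
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_sequence_state('public', 'sequence_test');

-- On the subscriber: srsublsn (populated by the sequence synchronization
-- patches later in this series) holds the publisher page LSN captured at the
-- last sync. If the publisher's current page_lsn is newer than this value,
-- the sequence has advanced and needs re-synchronization.
SELECT srsublsn
  FROM pg_subscription_rel
 WHERE srrelid = 'public.sequence_test'::regclass;
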
Attachment: v20250717-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 5ad8bcc3ce6222cabc29c3a6bd4f429c02bb58c0 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250717 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 10 files changed, 231 insertions(+), 187 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..ee98922c237 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e4fd6347fd1..b66ac6eb865 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index c5fb627aa56..39a53c84e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..663b87a9c80 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8cd57e276f2..673a34869aa 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2911,7 +2911,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250717-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From 971570e3575c6f7e71fe9626c7d44594b7086584 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 9 Jun 2025 20:18:54 +0530
Subject: [PATCH v20250717 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 373 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/pg_dump/common.c                    |   4 +-
 src/bin/pg_dump/pg_dump.c                   |   8 +-
 src/bin/pg_dump/pg_dump.h                   |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 17 files changed, 472 insertions(+), 118 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets a list of all sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise, get
+ * only tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences; otherwise,
+ * get only sequences that have not reached READY state (i.e. are still in
+ * INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index b2d5332effc..43c3d9c2975 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e23b0de7242..bd575237d5d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,18 +836,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note that this is a common function for handling different REFRESH
+ * commands, according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -835,10 +892,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,22 +951,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a
+		 * new state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +971,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,28 +988,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1051,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1006,10 +1108,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1023,11 +1125,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 53ddd25c42d..3dfa086faa8 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a6dfd92f313..7a272027e58 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10954,11 +10954,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index cc22fda1858..31d479f98a1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5144,12 +5144,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5203,7 +5203,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 4ab0c2edb95..f4f8dec6710 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 96779df2941..2bbcdbb4afa 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12286,6 +12286,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 27e23d972fc..17b03a9527f 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4358,7 +4358,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2175,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2185,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 529b2241731..14dad19158b 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

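As a quick illustration of the user-facing pieces added above (the new ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command and the pg_publication_sequences view), something like the following should work on the subscriber once the patches are applied; 'pub1' and 'sub1' are placeholder names, and how sequences become members of a publication is handled by the earlier patches in this series:

-- Sequences exposed by a publication, as seen through the new view:
SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'pub1';

-- Re-synchronize every published sequence on the subscriber:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- The existing REFRESH PUBLICATION also picks up newly published sequences:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
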
Attachment: v20250717-0005-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 826ac4aa5bb73a906a8c23b8c63c66ad5f609e8f Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 9 Jul 2025 14:58:11 +0530
Subject: [PATCH v20250717 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of the sequences that are in INIT state.
    b) Raises an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized; commits them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
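
To make the above concrete, the expected progression can be observed on the
subscriber with a catalog query along these lines (a sketch; 'sub1' is a
placeholder subscription name, and 'i'/'r' are the existing INIT/READY state
codes of pg_subscription_rel):

SELECT c.relname, sr.srsubstate, sr.srsublsn
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relkind = 'S'
  AND sr.srsubid = (SELECT oid FROM pg_subscription WHERE subname = 'sub1');

Immediately after CREATE SUBSCRIPTION or either REFRESH variant, the published
sequences show up as 'i'; they switch to 'r' once the sequencesync worker has
copied their values.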
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  26 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  62 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 623 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   |  80 ++-
 src/backend/replication/logical/tablesync.c   |  48 +-
 src/backend/replication/logical/worker.c      |  73 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  28 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1164 insertions(+), 111 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index ebd5605afe3..b8f415cd50d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 43c3d9c2975..485f6be15b7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1404,6 +1404,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 1f96d445f2d..7d89cadb173 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt of sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1889,6 +1891,10 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 /*
  * Return the current on-disk state of the sequence.
  *
+ * The page LSN will be used in logical replication of sequences to record the
+ * LSN of the sequence page in the pg_subscription_rel system catalog.  It
+ * reflects the LSN of the remote sequence at the time it was synchronized.
+ *
  * Note: This is roughly equivalent to selecting the data from the sequence,
  * except that it also returns the page LSN.
  */
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bd575237d5d..fb410c5e503 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1066,7 +1066,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..5df81cbec82 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,28 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Update the failure time of the sequencesync worker in the subscription's
+ * apply worker.
+ *
+ * This function is invoked when the sequencesync worker exits due to a
+ * failure.
+ */
+void
+logicalrep_seqsyncworker_failure(int code, Datum arg)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->sequencesync_failure_time = GetCurrentTimestamp();
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +898,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1213,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1360,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1400,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..19984cbfc2c
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,623 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in a
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/*
+	 * If there is a free sync worker slot, start a new sequencesync worker.
+	 */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		/*
+		 * To prevent starting the sequencesync worker at a high frequency
+		 * after a failure, we store its last failure time. We start the
+		 * sequencesync worker again after waiting at least
+		 * wal_retrieve_retry_interval.
+		 */
+		if (!MyLogicalRepWorker->sequencesync_failure_time ||
+			TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+									   now, wal_retrieve_retry_interval))
+		{
+			MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+			if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+										  MyLogicalRepWorker->dbid,
+										  MySubscription->oid,
+										  MySubscription->name,
+										  MyLogicalRepWorker->userid,
+										  InvalidOid,
+										  DSM_HANDLE_INVALID))
+				MyLogicalRepWorker->sequencesync_failure_time = now;
+		}
+	}
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match those of the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match those of the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, LSNOID, INT8OID,
+		INT8OID, BOOLOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN LATERAL pg_sequence_state(s.schname, s.seqname) ps ON true\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			XLogRecPtr	page_lsn;
+			int64		last_value;
+			int64		log_cnt;
+			bool		is_called;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of rows fetched, because some sequences may be missing on
+		 * the publisher and the fetched row count may not match the batch
+		 * size.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that sequence synchronization runs as the sequence owner,
+		 * unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && (subsequences != NIL))
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
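
The monitoring side of the new worker, assuming the view changes elsewhere in
this patch set (the "sequence synchronization" worker_type in
pg_stat_subscription and the sequence_sync_error_count column in
pg_stat_subscription_stats), could be checked roughly like this:

-- Is a sequencesync worker currently running for any subscription?
SELECT subname, pid, worker_type
FROM pg_stat_subscription
WHERE worker_type = 'sequence synchronization';

-- How often has sequence synchronization failed per subscription?
SELECT subname, sequence_sync_error_count
FROM pg_stat_subscription_stats;
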
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..17e37f2feee 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,24 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/* This is a clean exit, so no need for any sequence failure logic. */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		cancel_before_shmem_exit(logicalrep_seqsyncworker_failure, 0);
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +100,9 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -109,7 +122,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
+		{
+			bool	has_pending_sequences = false;
+
+			/*
+			 * We need up-to-date sync state info for subscription tables and
+			 * sequences here.
+			 */
+			FetchRelationStates(&has_pending_sequences);
 			ProcessSyncingTablesForApply(current_lsn);
+			if (has_pending_sequences)
+				ProcessSyncingSequencesForApply();
+
+			break;
+		}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +149,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be used until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +177,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +186,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +199,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +228,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b66ac6eb865..5a8420f2ac7 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -1248,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1521,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1567,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1589,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 39a53c84e04..c49b025f16a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,17 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4800,6 +4834,9 @@ SetupApplyOrSyncWorker(int worker_slot)
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
 								  InvalidateRelationStates,
 								  (Datum) 0);
+
+	if (am_sequencesync_worker())
+		before_shmem_exit(logicalrep_seqsyncworker_failure, (Datum) 0);
 }
 
 /* Logical Replication Apply worker entry point */
@@ -4826,6 +4863,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4879,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 2bbcdbb4afa..3bae4e6dc11 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5696,9 +5696,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;		/* sequence name */
+	char	   *nspname;		/* schema (namespace) name */
+	Oid			localrelid;		/* OID of the local sequence relation */
+	bool		remote_seq_fetched; /* remote sequence value fetched yet? */
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 663b87a9c80..58767a85179 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz sequencesync_failure_time;
 } LogicalRepWorker;
 
 /*
@@ -241,6 +244,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +252,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_seqsyncworker_failure(int code, Datum arg);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,11 +270,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -332,15 +340,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences on the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences on the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = false) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false does not sync newly published sequence'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence definitions is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 673a34869aa..b38c684b3d7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250717-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 47900a8606aa1f657ddab84349f95c76fc9cc465 Mon Sep 17 00:00:00 2001
From: Nisha Moond <nisha.moond412@gmail.com>
Date: Mon, 30 Jun 2025 10:14:18 +0530
Subject: [PATCH v20250717 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

In addition, psql meta-commands are enhanced: \d on a sequence now lists
the publications that include it, and \dRp shows whether a publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
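
For illustration, the new syntax can be exercised as follows (the publication
names here are made up for the example and are not part of the patch):

    -- publish every sequence in the database
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- publish all tables and all sequences together
    CREATE PUBLICATION all_pub FOR ALL SEQUENCES, TABLES;

With this in place, \dRp lists an "All sequences" attribute for such
publications, and \d on a published sequence shows the publications that
include it.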
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 134 ++++--
 src/backend/parser/gram.y                 |  38 +-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  19 +-
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 558 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 707 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..572ec057ad8 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -72,6 +72,8 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
 								  AlterPublicationStmt *stmt);
 static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
 static char defGetGeneratedColsOption(DefElem *def);
+static void process_all_objtype_list(List *all_objects_list, bool *all_tables,
+									 bool *all_sequences);
 
 
 static void
@@ -820,6 +822,41 @@ CheckPubRelationColumnList(char *pubname, List *tables,
 	}
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check whether a publication object type is specified more than once.
+ */
+static void
+process_all_objtype_list(List *all_objects_list, bool *all_tables,
+						 bool *all_sequences)
+{
+	Assert(all_objects_list);
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Create new publication.
  */
@@ -841,6 +878,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	AclResult	aclresult;
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
+	bool		all_tables = false;
+	bool		all_sequences = false;
 
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -848,11 +887,22 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	if (stmt->puballobj)
+		process_all_objtype_list(stmt->pubobjects,
+								 &all_tables,
+								 &all_sequences);
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES publication"));
+		if (all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +934,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(all_tables);
+	values[Anum_pg_publication_puballsequences - 1] = BoolGetDatum(all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -913,12 +963,12 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	CommandCounterIncrement();
 
 	/* Associate objects with the publication. */
-	if (stmt->for_all_tables)
+	if (all_tables)
 	{
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1490,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1504,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1961,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2085,27 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+			if (form->puballsequences)
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));
+			if (is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 73345bb3c70..a6dfd92f313 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -259,6 +259,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +585,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10677,7 +10679,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10697,13 +10704,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
+					n->pubobjects = (List *) $6;
+					n->puballobj = true;
 					n->options = $7;
-					n->for_all_tables = true;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10815,6 +10823,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c6226175528..cc22fda1858 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4392,6 +4392,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4422,9 +4423,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4440,6 +4446,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4460,6 +4467,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4511,8 +4520,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2485d8f360e..b3bfbdc82bc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3302,6 +3302,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..3035b24f26f 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 37524364290..4ab0c2edb95 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3575,12 +3575,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..27e23d972fc 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,13 +4291,30 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
 	char	   *pubname;		/* Name of the publication */
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
-	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		puballobj;		/* True if the publication is for all tables,
+								 * all sequences, or both */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..749c316a107 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,92 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+ERROR:  invalid publication object list
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+ERROR:  invalid publication object list
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +320,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +338,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +370,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +386,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +405,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +416,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +452,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +583,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +878,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1071,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1282,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1325,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1340,10 +1408,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1421,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1450,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1476,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1547,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1558,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1579,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1591,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1603,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1614,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1625,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1636,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1667,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1679,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1761,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1782,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1917,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..1cf013e72f6 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, TABLES, SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ff050e93a50..8cd57e276f2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2349,6 +2349,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
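
For quick reference, here is a minimal psql sketch of the FOR ALL SEQUENCES
flow exercised by the regression tests above. This is illustrative only and
not part of the patches; it assumes the full patch set is applied, reuses the
object names from publication.sql, and refers to the pg_publication_sequences
view documented in the 0006 patch below.

-- Requires a superuser, as FOR ALL SEQUENCES publications do.
CREATE SEQUENCE regress_pub_seq0;
CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;

-- puballsequences should report 't' for a FOR ALL SEQUENCES publication.
SELECT pubname, puballtables, puballsequences
  FROM pg_publication
 WHERE pubname = 'regress_pub_forallsequences1';

-- pg_publication_sequences maps publications to the sequences they publish.
SELECT *
  FROM pg_publication_sequences
 WHERE pubname = 'regress_pub_forallsequences1';

DROP PUBLICATION regress_pub_forallsequences1;
DROP SEQUENCE regress_pub_seq0;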

Attachment: v20250717-0006-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 0c632cffbdaf2e811270f0a4c2b0ed0ac43be399 Mon Sep 17 00:00:00 2001
From: Vignesh <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250717 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  76 +++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 464 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 0d23bc1b122..cda4eb46bd3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c7acc0f182f..81c7afad3c8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5188,9 +5188,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5331,8 +5331,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5355,10 +5355,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e26f7f59d4a..7cf90be2deb 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences are not
+   replicated incrementally; their state can be synchronized at any time. See
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2293,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2399,8 +2623,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2413,8 +2637,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 823afe1b30b..a1a2be86d38 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..0969a2d7075 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,20 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | ALL <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL TABLES
+    ALL SEQUENCES
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    TABLES
+    SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +126,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +157,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes the current state of all
+      sequences in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +301,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +320,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +472,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#267Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#266)
Re: Logical Replication of sequences

On Thu, Jul 17, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 16 Jul 2025 at 11:15, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Jul 14, 2025 at 10:03 AM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 11 Jul 2025 at 14:26, shveta malik <shveta.malik@gmail.com> wrote:

I have picked this up again for final review, started with 0001, I
think mostly 0001 looks good to me, except few comments

1.
+ lsn = PageGetLSN(page);
+ last_value = seq->last_value;
+ log_cnt = seq->log_cnt;
+ is_called = seq->is_called;
+
+ UnlockReleaseBuffer(buf);
+ sequence_close(seqrel, NoLock);
+
+ /* Page LSN for the sequence */
+ values[0] = LSNGetDatum(lsn);
+
+ /* The value most recently returned by nextval in the current session */
+ values[1] = Int64GetDatum(last_value);
+

I think we can avoid using extra variables like lsn, last_value,
etc.; instead we can directly copy into values[$] as shown below.

values[0] = LSNGetDatum(PageGetLSN(page));
values[1] = Int64GetDatum(seq->last_value);
...
UnlockReleaseBuffer(buf);
sequence_close(seqrel, NoLock);

Modified

2.
+       <para>
+        Returns information about the sequence. <literal>page_lsn</literal> is
+        the page LSN of the sequence, <literal>last_value</literal> is the
+        current value of the sequence, <literal>log_cnt</literal> shows how
+        many fetches remain before a new WAL record must be written, and
+        <literal>is_called</literal> indicates whether the sequence has been
+        used.

Shall we change 'is the page LSN of the sequence' to 'is the page LSN
of the sequence relation'

Modified

Also, this field doesn't seem very relevant for the user; we are
exposing it only because we need it for internal use. At the very
least, the commit message of this patch should give some details on
why we need to expose this field.

Updated commit message

The attached v20250717 version patch has the changes for the same.

Thanks, 0001 looks fine after this. I will share feedback for the other patches.

--
Regards,
Dilip Kumar
Google

#268Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#267)
Re: Logical Replication of sequences

On Fri, Jul 18, 2025 at 10:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Jul 17, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

I was looking at the high-level idea of the sequence sync worker
patch, i.e. 0005. So far I haven't found anything problematic there,
but I haven't completed the review and testing yet. Here are some
comments I have from reading through the patch; I will try to do a
more thorough review and testing next week.

1.
+ /*
+ * Count running sync workers for this subscription, while we have the
+ * lock.
+ */
+ nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+ /* Now safe to release the LWLock */
+ LWLockRelease(LogicalRepWorkerLock);
+
+ /*
+ * If there is a free sync worker slot, start a new sequencesync worker,
+ * and break from the loop.
+ */
+ if (nsyncworkers < max_sync_workers_per_subscription)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+
+ /*
+ * To prevent starting the sequencesync worker at a high frequency
+ * after a failure, we store its last failure time. We start the
+ * sequencesync worker again after waiting at least
+ * wal_retrieve_retry_interval.
+ */
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+   MyLogicalRepWorker->dbid,
+   MySubscription->oid,
+   MySubscription->name,
+   MyLogicalRepWorker->userid,
+   InvalidOid,
+   DSM_HANDLE_INVALID))
+ MyLogicalRepWorker->sequencesync_failure_time = now;
+ }

This code seems to duplicate much of the logic found in
ProcessSyncingTablesForApply() within its final else block, with only
minor differences (perhaps 1-2 lines).

To improve code maintainability and avoid redundancy, consider
extracting the common logic into a static function. This function
could then be called from both places.

2.
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */

Change to "Common function to setup the leader apply, tablesync and
sequencesync worker"

3.
+ /*
+ * To prevent starting the sequencesync worker at a high frequency
+ * after a failure, we store its last failure time. We start the
+ * sequencesync worker again after waiting at least
+ * wal_retrieve_retry_interval.
+ */

We haven't explained the rationale for comparing against the last
failure time for the sequencesync worker, whereas for the tablesync
worker we compare against the last start time.

--
Regards,
Dilip Kumar
Google

#269Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#266)
Re: Logical Replication of sequences

On Thu, Jul 17, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250717 version patch has the changes for the same.

Few comments on 0001 and 0002:
0001
1. Instead of introducing a new function, can we think of extending
the existing function pg_get_sequence_data()?

0002
2.
postgres=# Create publication pub2 for all tables, sequences;
CREATE PUBLICATION
...
postgres=# Create publication pub3 for all tables, all sequences;
ERROR: syntax error at or near "all"
LINE 1: Create publication pub3 for all tables, all sequences;

I was expecting the first syntax to give an ERROR and the second to
work. I had given this comment in my earlier email [1]. I see a
follow-up response by Nisha indicating that she agreed with it [2].
By any chance, did you misunderstand and implement the reverse of
what I asked?

3.
postgres=> Create publication pub3 for all tables, sequences;
ERROR: must be superuser to create a FOR ALL TABLES publication

In the above, the publication is both FOR ALL TABLES and ALL
SEQUENCES. Won't it be better to give a message like: "must be
superuser to create a FOR ALL TABLES OR ALL SEQUENCES publication"? I
think we can give this same message in cases where publication is (a)
FOR ALL TABLES, (b) FOR ALL SEQUENCES, or (c) FOR ALL TABLES,
SEQUENCES.

Whatever we decide here, we can follow that in other parts of the
patch (where applicable) as well. For example, one case is as below:
+ if (!superuser_arg(newOwnerId))
+ {
+ if (form->puballtables)
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+    NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+ if (form->puballsequences)
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+    NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));

[1]: /messages/by-id/CAA4eK1+6L+AoGS3LHdnYnCE=nRHergSQyhyO7Y=-sOp7isGVMw@mail.gmail.com
[2]: /messages/by-id/CABdArM52CSDuYsfTAEp4ZSWe+GFBvxgnPFgkG+id9T88DUE+1Q@mail.gmail.com

--
With Regards,
Amit Kapila.

#270vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#269)
6 attachment(s)
Re: Logical Replication of sequences

On Fri, 18 Jul 2025 at 16:15, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Jul 17, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250717 version patch has the changes for the same.

Few comments on 0001 and 0002:
0001
1. Instead of introducing a new function, can we think of extending
the existing function pg_get_sequence_data()?

Yes, this can be extended. Modified.
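
For reference, a minimal usage sketch of the extended function (the
sequence name is hypothetical, and the column names are taken from
the 0001 docs snippet quoted in #267; the exact signature and output
depend on the patch version):

CREATE SEQUENCE myseq;
SELECT nextval('myseq');
-- with the 0001 patch applied, the page LSN is expected to be
-- returned along with the existing fields:
SELECT page_lsn, last_value, log_cnt, is_called
  FROM pg_get_sequence_data('myseq'::regclass);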

0002
2.
postgres=# Create publication pub2 for all tables, sequences;
CREATE PUBLICATION
...
postgres=# Create publication pub3 for all tables, all sequences;
ERROR: syntax error at or near "all"
LINE 1: Create publication pub3 for all tables, all sequences;

I was expecting the first syntax to give an ERROR and the second to
work. I had given this comment in my earlier email [1]. I see a
follow-up response by Nisha indicating that she agreed with it [2].
By any chance, did you misunderstand and implement the reverse of
what I asked?

Yes, there was a misunderstanding here. Updated accordingly.
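
To make the intended behavior concrete, with the updated grammar the
expectation is (a sketch based on the discussion above; publication
names are arbitrary and the exact error wording may differ):

CREATE PUBLICATION pub_ok FOR ALL TABLES, ALL SEQUENCES;   -- accepted
CREATE PUBLICATION pub_bad FOR ALL TABLES, SEQUENCES;      -- rejected with a syntax error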

3.
postgres=> Create publication pub3 for all tables, sequences;
ERROR: must be superuser to create a FOR ALL TABLES publication

In the above, the publication is both FOR ALL TABLES and ALL
SEQUENCES. Won't it be better to give a message like: "must be
superuser to create a FOR ALL TABLES OR ALL SEQUENCES publication"? I
think we can give this same message in cases where publication is (a)
FOR ALL TABLES, (b) FOR ALL SEQUENCES, or (c) FOR ALL TABLES,
SEQUENCES.

Modified

Whatever we decide here, we can follow that in other parts of the
patch (where applicable) as well. For example, one case is as below:
+ if (!superuser_arg(newOwnerId))
+ {
+ if (form->puballtables)
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+    NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL TABLES publication must be a superuser."));
+ if (form->puballsequences)
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+    NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser."));

Modified here too.

The attached v20250720 version of the patch set includes these changes.

Regards,
Vignesh

Attachments:

v20250720-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch; charset=US-ASCII)
From 73fba3ebc9c628f7f1ac002cd4d5e9ea0ec43a19 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:45:14 +0530
Subject: [PATCH v20250720 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for 
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 373 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 14 files changed, 465 insertions(+), 111 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1c71161e723..ebd5605afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -462,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -499,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -511,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -524,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -538,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -560,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index b2d5332effc..43c3d9c2975 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e23b0de7242..bd575237d5d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -26,6 +26,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -103,6 +104,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  char *origin, Oid *subrel_local_oids,
@@ -692,6 +694,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -703,9 +713,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -720,6 +727,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.origin, NULL, 0, stmt->subname);
@@ -731,13 +742,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -754,6 +768,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -777,7 +797,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -816,18 +836,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -835,10 +892,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -858,16 +922,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relation in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -880,22 +951,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->origin, subrel_local_oids,
-								  subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->origin, subrel_local_oids,
+									  subrel_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -904,12 +971,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -920,28 +988,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -963,41 +1051,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1006,10 +1108,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1023,11 +1125,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1393,8 +1497,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1408,7 +1512,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1448,8 +1553,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1467,18 +1572,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1490,8 +1596,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1503,12 +1609,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1773,7 +1893,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2087,8 +2207,8 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * its partition ancestors (if it's a partition), or its partition children (if
  * it's a partitioned table), from some other publishers. This check is
  * required only if "copy_data = true" and "origin = none" for CREATE
- * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements to notify the
- * user that data having origin might have been copied.
+ * SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION statements to
+ * notify the user that data having origin might have been copied.
  *
  * This check need not be performed on the tables that are already added
  * because incremental sync for those tables will happen through WAL and the
@@ -2127,18 +2247,23 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables.
 	 */
 	for (i = 0; i < subrel_count; i++)
 	{
 		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
 
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-						 schemaname, tablename);
+		if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+		{
+			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+			char	   *tablename = get_rel_name(relid);
+
+			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							 schemaname, tablename);
+		}
 	}
 
 	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
@@ -2307,6 +2432,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f262e7a66f7..b58e81424ab 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9888e33d8df..25cf78e0e30 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10958,11 +10958,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 4ab0c2edb95..f4f8dec6710 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8360c2a3e89..1564ef43621 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12278,6 +12278,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2175,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2185,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index 529b2241731..14dad19158b 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250720-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch; charset=US-ASCII)
From fe0e6335f55112d8b27013052b547bbe5c09cdec Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:23:41 +0530
Subject: [PATCH v20250720 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

In addition, psql is enhanced: \d on a sequence now lists the
publications that include it, and \dRp shows whether a publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., FOR ALL
SEQUENCES, ALL TABLES), but it cannot be combined with other publication
object types such as TABLE or TABLES IN SCHEMA.
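
For illustration, a minimal sketch of the new syntax (publication and
table names are placeholders):

    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_both FOR ALL TABLES, ALL SEQUENCES;
    -- not allowed: ALL SEQUENCES cannot be mixed with TABLE or TABLES IN SCHEMA
    -- CREATE PUBLICATION pub_bad FOR ALL SEQUENCES, TABLE some_table;

As with FOR ALL TABLES, creating a FOR ALL SEQUENCES publication
requires superuser.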

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    |  74 +--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  22 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 564 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 698 insertions(+), 350 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..696227a2b8f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -884,8 +887,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] = BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +921,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1440,6 +1443,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1457,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1914,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2038,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES, FOR ALL SEQUENCES, or FOR TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 73345bb3c70..9888e33d8df 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +450,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10677,7 +10683,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10697,13 +10708,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10815,6 +10827,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19627,6 +19661,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 604fc109416..7b4dbf0a3fd 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4457,6 +4457,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4487,9 +4488,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4505,6 +4511,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4525,6 +4532,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4576,8 +4585,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 39eef1d6617..b1a6a08f52b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -662,6 +662,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index d8330e2bd17..b7f9a5d5b11 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3304,6 +3304,28 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index dd25d2fe7b8..3035b24f26f 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 37524364290..4ab0c2edb95 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3575,12 +3575,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..e3f14f880fe 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +324,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +342,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +374,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +390,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +409,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +420,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +456,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +469,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +587,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +882,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1075,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1286,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1329,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1331,7 +1403,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1340,10 +1412,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1425,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1454,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1480,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1551,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1562,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1595,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1618,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1629,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1640,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1671,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1683,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1765,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1786,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1921,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1952,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..8ad72420970 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ff050e93a50..8cd57e276f2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2349,6 +2349,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

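To make the new syntax concrete, here is a minimal sketch of how the commands exercised by the tests above and described in the documentation patch below fit together; the publication, subscription, and connection names are illustrative only and not taken from the patch set:

    -- Publisher: publish the current state of every sequence in the database,
    -- using the FOR ALL SEQUENCES form added by this patch set.
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- Subscriber: the initial CREATE SUBSCRIPTION synchronizes the published
    -- sequences; REFRESH PUBLICATION SEQUENCES re-synchronizes all of them
    -- later, e.g. shortly before an upgrade cutover.
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;
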
Attachment: v20250720-0006-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 94c7db6ff6a1247b84ca9a865d54eaed370a8d0d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250720 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 0d23bc1b122..cda4eb46bd3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8149,16 +8149,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8192,7 +8195,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8201,12 +8204,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c7acc0f182f..81c7afad3c8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5188,9 +5188,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table synchronization worker or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5331,8 +5331,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5355,10 +5355,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e26f7f59d4a..7cf90be2deb 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions on the publisher
+    and the subscriber are compared. If any definitions differ, an ERROR
+    listing the differing sequences is logged and the worker exits. The apply
+    worker detects this failure and repeatedly respawns the sequence
+    synchronization worker until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
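+   <para>
+    For example, if a sequence <literal>s2</literal> was created with
+    <literal>INCREMENT BY 10</literal> on the publisher but with a different
+    increment on the subscriber, the subscriber's definition could be aligned
+    using:
+<programlisting>
+ALTER SEQUENCE s2 INCREMENT BY 10;
+</programlisting></para>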
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence has been dropped on the
+    publisher, an ERROR listing the missing sequences is logged and the worker
+    exits. The apply worker detects this failure and repeatedly respawns the
+    sequence synchronization worker until the sequences are either recreated
+    on the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequences on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop them on the subscriber using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>
+    if they are no longer needed,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove them from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Because incremental sequence changes are not replicated, sequence values
+    on the subscriber can become stale whenever the sequences are updated on
+    the publisher.
+   </para>
+   <para>
+    To check whether a sequence is stale, compare its value on the publisher
+    and on the subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
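+   <para>
+    For example, the current value of a sequence <literal>s1</literal> can be
+    inspected on both nodes and compared:
+<programlisting>
+SELECT last_value FROM s1;
+</programlisting></para>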
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that the initial sequence values have been synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2293,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2399,8 +2623,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2413,8 +2637,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 823afe1b30b..a1a2be86d38 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index fdc648d007f..0ecc91b6fc1 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel"><structname>pg_subscription_rel</structname></link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      re-synchronizes the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal>, or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal>, and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 57dec28a5df..44308515bbb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -263,6 +263,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

Attachment: v20250720-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch)
From cc595badc050c2cf3a010a9baaf6d1c5a3211641 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:19:47 +0530
Subject: [PATCH v20250720 1/6] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 +++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index f5a0e0954a1..d687e72127a 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>
+        indicates whether the sequence has been used, <literal>log_cnt</literal>
+        shows how many fetches remain before a new WAL record must be written,
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..f5fa49517cf 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1778,15 +1779,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1801,6 +1803,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1816,11 +1822,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1fc19146f46..8360c2a3e89 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..452e1f0cc28 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt 
+------------+-----------+---------
+         10 | t         |      32
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..b772d5166d5 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

Attachment: v20250720-0005-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 5a5eadc3bfcc3963a95fe4bdbf734d9086f8a25e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sat, 19 Jul 2025 22:40:23 +0530
Subject: [PATCH v20250720 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 585 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 135 +++-
 src/backend/replication/logical/tablesync.c   |  97 +--
 src/backend/replication/logical/worker.c      |  69 ++-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1185 insertions(+), 153 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index ebd5605afe3..b8f415cd50d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 43c3d9c2975..485f6be15b7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1404,6 +1404,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index f5fa49517cf..708306b3b1c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1830,6 +1832,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bd575237d5d..fb410c5e503 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1066,7 +1066,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1870,7 +1870,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 4aed0dfcebb..f14a6b7a5bf 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -239,19 +239,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -261,7 +260,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -321,6 +320,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -406,7 +406,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -492,8 +493,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -616,13 +625,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -689,7 +698,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -819,6 +828,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the sequencesync worker's last start time, which is tracked by the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -867,7 +895,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1182,7 +1210,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1329,7 +1357,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1369,6 +1397,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..329ce71d49c
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,585 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
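+	/* No sequencesync worker is running, so try to launch one. */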
+	check_and_launch_sync_worker(InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but whose definitions do not match.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy the current state of the published sequences from the publisher.
+ *
+ * Fetch each sequence's value from the publisher and set the corresponding
+ * subscriber sequence to the same value. The caller is responsible for
+ * locking the local relations.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/*
+	 * Sort the list of sequences so that it matches the ORDER BY of the
+	 * remote query and each batch's result rows can be matched with a single
+	 * forward scan.
+	 */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
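+		/*
+		 * Result column types for the query built below: schema name,
+		 * sequence name, the columns returned by pg_get_sequence_data(),
+		 * and the sequence parameters from pg_sequence.
+		 */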
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Build the list of (schema, sequence) pairs for the current batch
+		 * to be fetched from the publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/*
+			 * Locate the local entry for the sequence returned by the
+			 * publisher. Both the local list and the query result are sorted
+			 * by schema and sequence name, so a single forward scan through
+			 * the list suffices.
+			 */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
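+				/*
+				 * Parameters match; apply the publisher's state to the local
+				 * sequence and mark it READY, recording the remote page LSN.
+				 */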
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If fewer sequences were processed than were requested in this
+		 * batch, some of them are missing on the publisher. Identify which
+		 * ones.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the batch size rather than by the number
+		 * of fetched rows, because some sequences may be missing on the
+		 * publisher and the result may contain fewer rows than the batch.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the sequence synchronization runs as the sequence
+		 * owner, unless the user has opted out of that behavior.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && (subsequences != NIL))
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence sync with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..376a738e199 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,59 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if a worker slot is
+ * available and the retry interval has elapsed.
+ *
+ * relid: InvalidOid for a sequencesync worker, the table's relid for a
+ * tablesync worker.
+ * last_start_time: pointer to the worker's last start time.
+ *
+ * The caller must hold LogicalRepWorkerLock; it is released by this function.
+ */
+void
+check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time)
+{
+	int			nsyncworkers;
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +162,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +175,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +202,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +230,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +239,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +252,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +281,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index b66ac6eb865..188dcdcb845 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -523,49 +523,16 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			}
 			else
 			{
-				/*
-				 * If there is no sync worker for this table yet, count
-				 * running sync workers for this subscription, while we have
-				 * the lock.
-				 */
-				int			nsyncworkers =
-					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
-				/* Now safe to release the LWLock */
-				LWLockRelease(LogicalRepWorkerLock);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID);
-					}
-				}
+				check_and_launch_sync_worker(rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1248,7 +1215,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1521,7 +1488,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1566,7 +1534,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1574,7 +1542,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1588,23 +1556,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 39a53c84e04..c0b3c0528f1 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -482,6 +482,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1022,7 +1027,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1144,7 +1152,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1200,7 +1211,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1266,7 +1280,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1401,7 +1418,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2243,7 +2263,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3720,7 +3743,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -4529,7 +4555,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -4649,8 +4676,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -4729,6 +4756,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -4748,14 +4779,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -4826,6 +4859,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -4838,9 +4875,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 1564ef43621..a82d9ca1973 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 663b87a9c80..a019db98e42 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -92,6 +93,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -241,6 +244,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -248,13 +252,17 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid dbid, Oid subid, const char *subname,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
 
+extern void logicalrep_reset_seqsync_start_time(void);
+
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
@@ -263,11 +271,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -332,15 +341,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra sequence values will be
+# fetched, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data = off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when a sequence definition does not match between publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for differing sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 673a34869aa..b38c684b3d7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250720-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 233f84f5c464470ae1fc6db0d51781e889856e37 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250720 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 13 files changed, 238 insertions(+), 194 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..1c71161e723 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -488,13 +488,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index d25085d3515..ee98922c237 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -962,7 +962,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e4fd6347fd1..b66ac6eb865 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -664,37 +612,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1331,7 +1248,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1572,77 +1489,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1728,7 +1574,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1746,7 +1592,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index c5fb627aa56..39a53c84e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1023,7 +1023,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1145,7 +1145,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1201,7 +1201,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1267,7 +1267,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1402,7 +1402,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2244,7 +2244,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3721,7 +3721,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4798,7 +4798,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 7b4dbf0a3fd..2d14c615073 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5209,12 +5209,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5268,7 +5268,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b1a6a08f52b..1ceb25bdcde 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -819,6 +819,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 30b2775952c..663b87a9c80 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -237,6 +237,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -259,9 +261,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8cd57e276f2..673a34869aa 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2911,7 +2911,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

#271vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#268)
Re: Logical Replication of sequences

On Fri, 18 Jul 2025 at 14:11, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Fri, Jul 18, 2025 at 10:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Jul 17, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

I was looking at the high level idea of sequence sync worker patch
i.e. 0005, so far I haven't found anything problematic there, but I
haven't completed the review and testing yet. Here are some comments
I have while reading through the patch. I will try to do more
thorough review and testing next week.

1.
+ /*
+ * Count running sync workers for this subscription, while we have the
+ * lock.
+ */
+ nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+ /* Now safe to release the LWLock */
+ LWLockRelease(LogicalRepWorkerLock);
+
+ /*
+ * If there is a free sync worker slot, start a new sequencesync worker,
+ * and break from the loop.
+ */
+ if (nsyncworkers < max_sync_workers_per_subscription)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+
+ /*
+ * To prevent starting the sequencesync worker at a high frequency
+ * after a failure, we store its last failure time. We start the
+ * sequencesync worker again after waiting at least
+ * wal_retrieve_retry_interval.
+ */
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+   MyLogicalRepWorker->dbid,
+   MySubscription->oid,
+   MySubscription->name,
+   MyLogicalRepWorker->userid,
+   InvalidOid,
+   DSM_HANDLE_INVALID))
+ MyLogicalRepWorker->sequencesync_failure_time = now;
+ }

This code seems to duplicate much of the logic found in
ProcessSyncingTablesForApply() within its final else block, with only
minor differences (perhaps 1-2 lines).

To improve code maintainability and avoid redundancy, consider
extracting the common logic into a static function. This function
could then be called from both places.

Modified

2.
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */

Change to "Common function to setup the leader apply, tablesync and
sequencesync worker"

Modified

3.
+ /*
+ * To prevent starting the sequencesync worker at a high frequency
+ * after a failure, we store its last failure time. We start the
+ * sequencesync worker again after waiting at least
+ * wal_retrieve_retry_interval.
+ */

We haven't explained what's the rationale behind comparing with the
last failure time for sequence sync worker whereas for table sync
worker we compare with last start time.

Since we use a single sequencesync worker to handle all sequence
synchronization, I considered marking a failure when the worker exits
and using that as a trigger for retries. However, since tablesync
relies on the start time for retries, it would be more consistent to
apply the same approach here.

The v20250720 version patch attached at [1] has the changes for the same.
[1]: /messages/by-id/CALDaNm2swnY6nYAg==7-4ah3yyaBQ_5wyr57p=+vtpfuSOT+ag@mail.gmail.com

Regards,
Vignesh

#272Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#271)
Re: Logical Replication of sequences

On Sun, Jul 20, 2025 at 7:48 PM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 18 Jul 2025 at 14:11, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Fri, Jul 18, 2025 at 10:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Jul 17, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

I was looking at the high level idea of sequence sync worker patch
i.e. 0005, so far I haven't found anything problematic there, but I
haven't completed the review and testing yet. Here are some comments
I have while reading through the patch. I will try to do more
thorough review and testing next week.

1.
+ /*
+ * Count running sync workers for this subscription, while we have the
+ * lock.
+ */
+ nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+ /* Now safe to release the LWLock */
+ LWLockRelease(LogicalRepWorkerLock);
+
+ /*
+ * If there is a free sync worker slot, start a new sequencesync worker,
+ * and break from the loop.
+ */
+ if (nsyncworkers < max_sync_workers_per_subscription)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+
+ /*
+ * To prevent starting the sequencesync worker at a high frequency
+ * after a failure, we store its last failure time. We start the
+ * sequencesync worker again after waiting at least
+ * wal_retrieve_retry_interval.
+ */
+ if (!MyLogicalRepWorker->sequencesync_failure_time ||
+ TimestampDifferenceExceeds(MyLogicalRepWorker->sequencesync_failure_time,
+    now, wal_retrieve_retry_interval))
+ {
+ MyLogicalRepWorker->sequencesync_failure_time = 0;
+
+ if (!logicalrep_worker_launch(WORKERTYPE_SEQUENCESYNC,
+   MyLogicalRepWorker->dbid,
+   MySubscription->oid,
+   MySubscription->name,
+   MyLogicalRepWorker->userid,
+   InvalidOid,
+   DSM_HANDLE_INVALID))
+ MyLogicalRepWorker->sequencesync_failure_time = now;
+ }

This code seems to duplicate much of the logic found in
ProcessSyncingTablesForApply() within its final else block, with only
minor differences (perhaps 1-2 lines).

To improve code maintainability and avoid redundancy, consider
extracting the common logic into a static function. This function
could then be called from both places.

Modified

2.
+/*
+ * Common function to setup the leader apply, tablesync worker and sequencesync
+ * worker.
+ */

Change to "Common function to setup the leader apply, tablesync and
sequencesync worker"

Modified

3.
+ /*
+ * To prevent starting the sequencesync worker at a high frequency
+ * after a failure, we store its last failure time. We start the
+ * sequencesync worker again after waiting at least
+ * wal_retrieve_retry_interval.
+ */

We haven't explained what's the rationale behind comparing with the
last failure time for sequence sync worker whereas for table sync
worker we compare with last start time.

Since we use a single sequencesync worker to handle all sequence
synchronization, I considered marking a failure when the worker exits
and using that as a trigger for retries. However, since tablesync
relies on the start time for retries, it would be more consistent to
apply the same approach here.

The v20250720 version patch attached at [1] has the changes for the same.
[1] - /messages/by-id/CALDaNm2swnY6nYAg==7-4ah3yyaBQ_5wyr57p=+vtpfuSOT+ag@mail.gmail.com

I was just trying a different test, so I realized that ALTER
PUBLICATION ADD SEQUENCE is not supported, any reason for the same?

postgres[154731]=# ALTER PUBLICATION pub ADD sequence s1;
ERROR: 42601: invalid publication object list
LINE 1: ALTER PUBLICATION pub ADD sequence s1;
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.
LOCATION: preprocess_pubobj_list, gram.y:19685

--
Regards,
Dilip Kumar
Google

#273Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#272)
Re: Logical Replication of sequences

On Mon, Jul 21, 2025 at 10:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

I was just trying a different test, so I realized that ALTER
PUBLICATION ADD SEQUENCE is not supported, any reason for the same?

postgres[154731]=# ALTER PUBLICATION pub ADD sequence s1;
ERROR: 42601: invalid publication object list
LINE 1: ALTER PUBLICATION pub ADD sequence s1;
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.
LOCATION: preprocess_pubobj_list, gram.y:19685

Also I noticed that
1. We don't allow creating publication with individual sequences (e.g.
CREATE PUBLICATION pub FOR SEQUENCE s1;). Is it because the main
purpose of this sync is major version upgrade and we do not have
scenarios for replicating a few sequences or there are some technical
difficulties in achieving that or both.

2. This syntax works (CREATE PUBLICATION pub FOR ALL TABLES,
SEQUENCES;) but tab completion doesn't suggest this

3. Some of the syntaxes works for sequence which doesn't make sense to
me, as listed below, I think there are more

postgres[154731]=# CREATE PUBLICATION insert_only FOR ALL SEQUENCES
WITH (publish = 'insert');
CREATE PUBLICATION

postgres[154731]=# CREATE PUBLICATION pub FOR ALL SEQUENCES WITH (
PUBLISH_VIA_PARTITION_ROOT );
CREATE PUBLICATION

--
Regards,
Dilip Kumar
Google

#274shveta malik
shveta.malik@gmail.com
In reply to: Dilip Kumar (#273)
Re: Logical Replication of sequences

On Mon, Jul 21, 2025 at 11:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

3. Some of the syntaxes works for sequence which doesn't make sense to
me, as listed below, I think there are more

postgres[154731]=# CREATE PUBLICATION insert_only FOR ALL SEQUENCES
WITH (publish = 'insert');
CREATE PUBLICATION

postgres[154731]=# CREATE PUBLICATION pub FOR ALL SEQUENCES WITH (
PUBLISH_VIA_PARTITION_ROOT );
CREATE PUBLICATION

+1. I had the same concerns at [1]. It might be feasible to restrict
this if we have CREATE SUB for ALL SEQ alone. But if we have ALL
SEQUENCES and ALL TABLES together, then 'WITH' makes sense for tables
but not for sequences. My suggestion earlier was to at least display a
NOTICE saying that WITH is not applicable to SEQUENCES (in case we
cannot restrict it).

[1]: /messages/by-id/CAJpy0uBFXJLOiFOL8QgSeS93Bf=ZQd86BXZa=MeijHQo-=a2cA@mail.gmail.com

thanks
Shveta

#275vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#272)
Re: Logical Replication of sequences

On Mon, 21 Jul 2025 at 10:36, Dilip Kumar <dilipbalaut@gmail.com> wrote:

I was just trying a different test, so I realized that ALTER
PUBLICATION ADD SEQUENCE is not supported, any reason for the same?

postgres[154731]=# ALTER PUBLICATION pub ADD sequence s1;
ERROR: 42601: invalid publication object list
LINE 1: ALTER PUBLICATION pub ADD sequence s1;
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.
LOCATION: preprocess_pubobj_list, gram.y:19685

I have intentionally left this out for now. Once the current patch is
committed, we can extend it.

Regards,
Vignesh

#276vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#273)
Re: Logical Replication of sequences

On Mon, 21 Jul 2025 at 11:15, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Jul 21, 2025 at 10:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

I was just trying a different test, so I realized that ALTER
PUBLICATION ADD SEQUENCE is not supported, any reason for the same?

postgres[154731]=# ALTER PUBLICATION pub ADD sequence s1;
ERROR: 42601: invalid publication object list
LINE 1: ALTER PUBLICATION pub ADD sequence s1;
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.
LOCATION: preprocess_pubobj_list, gram.y:19685

Also I noticed that
1. We don't allow creating publication with individual sequences (e.g.
CREATE PUBLICATION pub FOR SEQUENCE s1;). Is it because the main
purpose of this sync is major version upgrade and we do not have
scenarios for replicating a few sequences or there are some technical
difficulties in achieving that or both.

There are no technical difficulties here. The main goal was to support
all sequences necessary for the upgrade scenario. Once that is
complete, the implementation can be extended based on additional use
cases.

2. This syntax works (CREATE PUBLICATION pub FOR ALL TABLES,
SEQUENCES;) but tab completion doesn't suggest this

Nisha had analysed this and shared this earlier at [1]:
Tab-completion is not supported after a comma (,) in any other cases.
For example, the following commands are valid, but tab-completion does
not work after the comma:
CREATE PUBLICATION pub7 FOR TABLE t1, TABLES IN SCHEMA public;
CREATE PUBLICATION pub7 FOR TABLES IN SCHEMA public, TABLES IN SCHEMA schema2;

3. Some of the syntaxes works for sequence which doesn't make sense to
me, as listed below, I think there are more

postgres[154731]=# CREATE PUBLICATION insert_only FOR ALL SEQUENCES
WITH (publish = 'insert');
CREATE PUBLICATION

postgres[154731]=# CREATE PUBLICATION pub FOR ALL SEQUENCES WITH (
PUBLISH_VIA_PARTITION_ROOT );
CREATE PUBLICATION

There is a documentation for this at sql-createpublication.html:
WITH ( publication_parameter [= value] [, ... ] )
This clause specifies optional parameters for a publication when
publishing tables. This clause is not applicable for sequences.

I felt it was enough, should we do anything more here?

[1]: /messages/by-id/CABdArM5axwoTorZnJww5rE79SNzvnnXCfWkv7XJex1Rkz=JDog@mail.gmail.com

Regards,
Vignesh

#277Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#276)
Re: Logical Replication of sequences

On Mon, Jul 21, 2025 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 21 Jul 2025 at 11:15, Dilip Kumar <dilipbalaut@gmail.com> wrote:

3. Some of the syntaxes works for sequence which doesn't make sense to
me, as listed below, I think there are more

postgres[154731]=# CREATE PUBLICATION insert_only FOR ALL SEQUENCES
WITH (publish = 'insert');
CREATE PUBLICATION

postgres[154731]=# CREATE PUBLICATION pub FOR ALL SEQUENCES WITH (
PUBLISH_VIA_PARTITION_ROOT );
CREATE PUBLICATION

There is a documentation for this at sql-createpublication.html:
WITH ( publication_parameter [= value] [, ... ] )
This clause specifies optional parameters for a publication when
publishing tables. This clause is not applicable for sequences.

I felt it was enough, should we do anything more here?

It would be better if we can give ERROR for options that are not
specific to sequences.
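
For example, something along these lines (the error wording is only a sketch
to illustrate the suggestion, not what any posted patch emits):

CREATE PUBLICATION pub_seq FOR ALL SEQUENCES WITH (publish = 'insert');
-- desired (hypothetical) outcome:
ERROR:  publication parameter "publish" is not applicable to sequences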

--
With Regards,
Amit Kapila.

#278Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#275)
Re: Logical Replication of sequences

On Mon, Jul 21, 2025 at 2:23 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 21 Jul 2025 at 10:36, Dilip Kumar <dilipbalaut@gmail.com> wrote:

I was just trying a different test, so I realized that ALTER
PUBLICATION ADD SEQUENCE is not supported, any reason for the same?

postgres[154731]=# ALTER PUBLICATION pub ADD sequence s1;
ERROR: 42601: invalid publication object list
LINE 1: ALTER PUBLICATION pub ADD sequence s1;
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.
LOCATION: preprocess_pubobj_list, gram.y:19685

I have intentionally left this out for now. Once the current patch is
committed, we can extend it.

Okay

--
Regards,
Dilip Kumar
Google

#279Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#276)
Re: Logical Replication of sequences

On Mon, Jul 21, 2025 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 21 Jul 2025 at 11:15, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Jul 21, 2025 at 10:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

I was just trying a different test, so I realized that ALTER
PUBLICATION ADD SEQUENCE is not supported, any reason for the same?

postgres[154731]=# ALTER PUBLICATION pub ADD sequence s1;
ERROR: 42601: invalid publication object list
LINE 1: ALTER PUBLICATION pub ADD sequence s1;
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.
LOCATION: preprocess_pubobj_list, gram.y:19685

Also I noticed that
1. We don't allow creating publication with individual sequences (e.g.
CREATE PUBLICATION pub FOR SEQUENCE s1;). Is it because the main
purpose of this sync is major version upgrade and we do not have
scenarios for replicating a few sequences or there are some technical
difficulties in achieving that or both.

There are no technical difficulties here. The main goal was to support
all sequences necessary for the upgrade scenario. Once that is
complete, the implementation can be extended based on additional use
cases.

2. This syntax works (CREATE PUBLICATION pub FOR ALL TABLES,
SEQUENCES;) but tab completion doesn't suggest this

Nisha had analysed this and shared this earlier at [1]:
Tab-completion is not supported after a comma (,) in any other cases.
For example, the following commands are valid, but tab-completion does
not work after the comma:
CREATE PUBLICATION pub7 FOR TABLE t1, TABLES IN SCHEMA public;
CREATE PUBLICATION pub7 FOR TABLES IN SCHEMA public, TABLES IN SCHEMA schema2;

3. Some of the syntaxes works for sequence which doesn't make sense to
me, as listed below, I think there are more

postgres[154731]=# CREATE PUBLICATION insert_only FOR ALL SEQUENCES
WITH (publish = 'insert');
CREATE PUBLICATION

postgres[154731]=# CREATE PUBLICATION pub FOR ALL SEQUENCES WITH (
PUBLISH_VIA_PARTITION_ROOT );
CREATE PUBLICATION

There is a documentation for this at sql-createpublication.html:
WITH ( publication_parameter [= value] [, ... ] )
This clause specifies optional parameters for a publication when
publishing tables. This clause is not applicable for sequences.

I felt it was enough, should we do anything more here?

[1] - /messages/by-id/CABdArM5axwoTorZnJww5rE79SNzvnnXCfWkv7XJex1Rkz=JDog@mail.gmail.com

I meant the PUBLISH_VIA_PARTITION_ROOT and other options inside the
WITH() that are not applicable for SEQUENCES, so shall we throw an
error if we are only publishing sequences and using these options?

--
Regards,
Dilip Kumar
Google

#280shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#277)
Re: Logical Replication of sequences

On Mon, Jul 21, 2025 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Jul 21, 2025 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 21 Jul 2025 at 11:15, Dilip Kumar <dilipbalaut@gmail.com> wrote:

3. Some of the syntaxes works for sequence which doesn't make sense to
me, as listed below, I think there are more

postgres[154731]=# CREATE PUBLICATION insert_only FOR ALL SEQUENCES
WITH (publish = 'insert');
CREATE PUBLICATION

postgres[154731]=# CREATE PUBLICATION pub FOR ALL SEQUENCES WITH (
PUBLISH_VIA_PARTITION_ROOT );
CREATE PUBLICATION

There is a documentation for this at sql-createpublication.html:
WITH ( publication_parameter [= value] [, ... ] )
This clause specifies optional parameters for a publication when
publishing tables. This clause is not applicable for sequences.

I felt it was enough, should we do anything more here?

It would be better if we can give ERROR for options that are not
specific to sequences.

Alter-PUB also should give an error then. Currently it works

postgres=# alter publication pub1 set (publish='insert,update');
ALTER PUBLICATION

pub1 here is an all-sequence publication.

Also, we need to decide behaviour for a publication with 'all tables,
all sequences' having such WITH options for both CREATE and ALTER-PUB.
The options are valid for tables but not for sequences. In such a
case, giving an error might not be correct. For this particular case,
we can give a NOTICE saying options not applicable to sequences.
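
For instance, the mixed case could look something like this (hypothetical
NOTICE wording, shown only to illustrate the idea):

CREATE PUBLICATION pub_all FOR ALL TABLES, SEQUENCES WITH (publish = 'insert');
-- possible (hypothetical) outcome:
NOTICE:  publication parameter "publish" has no effect on sequences
CREATE PUBLICATION
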
Thoughts?

thanks
Shveta

#281vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#279)
6 attachment(s)
Re: Logical Replication of sequences

On Mon, 21 Jul 2025 at 15:24, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Jul 21, 2025 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 21 Jul 2025 at 11:15, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Jul 21, 2025 at 10:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

I was just trying a different test, so I realized that ALTER
PUBLICATION ADD SEQUENCE is not supported, any reason for the same?

postgres[154731]=# ALTER PUBLICATION pub ADD sequence s1;
ERROR: 42601: invalid publication object list
LINE 1: ALTER PUBLICATION pub ADD sequence s1;
DETAIL: One of TABLE or TABLES IN SCHEMA must be specified before a
standalone table or schema name.
LOCATION: preprocess_pubobj_list, gram.y:19685

Also I noticed that
1. We don't allow creating publication with individual sequences (e.g.
CREATE PUBLICATION pub FOR SEQUENCE s1;). Is it because the main
purpose of this sync is major version upgrade and we do not have
scenarios for replicating a few sequences or there are some technical
difficulties in achieving that or both.

There are no technical difficulties here. The main goal was to support
all sequences necessary for the upgrade scenario. Once that is
complete, the implementation can be extended based on additional use
cases.

2. This syntax works (CREATE PUBLICATION pub FOR ALL TABLES,
SEQUENCES;) but tab completion doesn't suggest this

Nisha had analysed this and shared this earlier at [1]:
Tab-completion is not supported after a comma (,) in any other cases.
For example, the following commands are valid, but tab-completion does
not work after the comma:
CREATE PUBLICATION pub7 FOR TABLE t1, TABLES IN SCHEMA public;
CREATE PUBLICATION pub7 FOR TABLES IN SCHEMA public, TABLES IN SCHEMA schema2;

3. Some of the syntaxes works for sequence which doesn't make sense to
me, as listed below, I think there are more

postgres[154731]=# CREATE PUBLICATION insert_only FOR ALL SEQUENCES
WITH (publish = 'insert');
CREATE PUBLICATION

postgres[154731]=# CREATE PUBLICATION pub FOR ALL SEQUENCES WITH (
PUBLISH_VIA_PARTITION_ROOT );
CREATE PUBLICATION

There is a documentation for this at sql-createpublication.html:
WITH ( publication_parameter [= value] [, ... ] )
This clause specifies optional parameters for a publication when
publishing tables. This clause is not applicable for sequences.

I felt it was enough, should we do anything more here?

[1] - /messages/by-id/CABdArM5axwoTorZnJww5rE79SNzvnnXCfWkv7XJex1Rkz=JDog@mail.gmail.com

I meant the PUBLISH_VIA_PARTITION_ROOT and other options inside the
WITH() that are not applicable for SEQUENCES, so shall we throw an
error if we are only publishing sequences and using these options?

This is handled in the v20250723 version patch attached.

Also the comment from [1] is addressed.
[1]: /messages/by-id/CAJpy0uB0HmmiqrE5DrvH-jZPgSM7iiObGuTbg5JeOrEPT_5xPw@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250723-0001-Enhance-pg_get_sequence_data-function.patch (application/octet-stream)
From c33906d95c65ac43ee341268a7c324d2c404ac56 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:19:47 +0530
Subject: [PATCH v20250723 1/6] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 +++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index de5b5929ee0..5189dd754f7 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>
+        indicates whether the sequence has been used, <literal>log_cnt</literal>
+        shows how many fetches remain before a new WAL record must be written,
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..f5fa49517cf 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1778,15 +1779,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1801,6 +1803,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1816,11 +1822,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3ee8fed7e53..9cc52a7c83f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..452e1f0cc28 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt 
+------------+-----------+---------
+         10 | t         |      32
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..b772d5166d5 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0
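
A rough sketch of how the two new columns above are meant to be used, based
only on this patch and the existing pg_subscription_rel catalog (the commit
message mentions pg_sequence_state() for the publisher-side check, and the
actual wiring lands in the later patches of the series):

-- on the publisher: current page LSN of a sequence (placeholder name)
SELECT page_lsn FROM pg_get_sequence_data('some_seq');

-- on the subscriber: LSN recorded for that sequence at its last sync
SELECT srsublsn FROM pg_subscription_rel WHERE srrelid = 'some_seq'::regclass;

-- if the publisher's page_lsn has advanced past the stored srsublsn, the
-- sequence has changed since the last sync and may need re-synchronization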

v20250723-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From 136514882ac5afd581b8c58fdd439d26d19103f7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:34:50 +0530
Subject: [PATCH v20250723 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 ++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 387 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 14 files changed, 472 insertions(+), 118 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index dc3f9ed3fbf..ec46b126304 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1062,6 +1062,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1334,3 +1370,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b6ba367b877..6cef3b9c27e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -463,7 +464,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -500,7 +503,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -512,8 +516,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -525,12 +543,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -539,6 +567,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -561,9 +592,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index f6eca09ee15..a0b1a0ef56f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index cd6c3684482..5e412df02f6 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,6 +107,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +717,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +736,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +750,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -758,13 +769,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -781,6 +795,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +824,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -843,18 +863,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -885,16 +949,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -907,22 +978,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->retaindeadtuples, sub->origin,
+									  subrel_local_oids, subrel_count,
+									  sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +999,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,28 +1016,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1079,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1136,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1153,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1538,8 +1643,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1553,7 +1658,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1593,8 +1699,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1612,18 +1718,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1742,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,12 +1755,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1931,7 +2052,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2246,16 +2367,16 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * 	  statements with "retain_dead_tuples = true" and "origin = any", and for
+ * 	  ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
  *    when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
@@ -2314,24 +2435,28 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION, subrel_local_oids
+	 * contains the list of relation oids that are already present on the
+	 * subscriber. This check should be skipped for these tables if checking
+	 * for table sync scenario. However, when handling the retain_dead_tuples
+	 * scenario, ensure all tables are checked, as some existing tables may now
+	 * include changes from other origins due to newly created subscriptions on
+	 * the publisher.
 	 */
 	if (check_table_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								schemaname, tablename);
+			}
 		}
 	}
 
@@ -2611,6 +2736,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f262e7a66f7..b58e81424ab 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9888e33d8df..25cf78e0e30 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10958,11 +10958,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 7af28109e16..4b50c0a4b6d 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9cc52a7c83f..3fe60ae82cd 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2175,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2185,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..629e2617f63 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
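
[Illustration only, not part of the patch: with 0004 applied, the new
publisher-side view pg_publication_sequences lists the sequences carried by a
publication. The publication name pub1 is hypothetical.]

    -- run on the publisher; pub1 is a hypothetical FOR ALL SEQUENCES publication
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'pub1';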

Attachment: v20250723-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 4bc47a6adbadfe0c5576c69047479cd1eb2b4d78 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:23:41 +0530
Subject: [PATCH v20250723 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d command now shows which publications include the
specified sequence, and the \dRp command shows whether a publication
publishes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
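
For example (the publication names are illustrative only):

    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_all FOR ALL SEQUENCES, ALL TABLES;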

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   4 +-
 src/backend/commands/publicationcmds.c    | 112 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   8 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 564 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 773 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..dc3f9ed3fbf 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -134,7 +134,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1084,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..a599b7c69a4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -881,11 +884,30 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_generated_columns_given,
 							  &publish_generated_columns);
 
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a
+		 * FOR ALL SEQUENCES publication. If the publication includes tables
+		 * as well, issue a warning.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters do not affect sequence synchronization"));
+	}
+
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
-	values[Anum_pg_publication_puballtables - 1] =
-		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballtables - 1] = stmt->for_all_tables;
+	values[Anum_pg_publication_puballsequences - 1] = stmt->for_all_sequences;
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +940,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1000,6 +1022,25 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a
+		 * FOR ALL SEQUENCES publication. If the publication includes tables
+		 * as well, issue a warning.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters do not affect sequence synchronization"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1481,8 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
+
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1495,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -1902,6 +1952,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 					errcode(ERRCODE_SYNTAX_ERROR),
 					errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));
 
+		if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+			ereport(ERROR,
+					errcode(ERRCODE_UNDEFINED_OBJECT),
+					errmsg("relation \"%s\" is not part of the publication",
+						   RelationGetRelationName(rel)),
+					errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
 		prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
 							   ObjectIdGetDatum(relid),
 							   ObjectIdGetDatum(pubid));
@@ -2019,19 +2076,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 73345bb3c70..9888e33d8df 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +450,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10677,7 +10683,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10697,13 +10708,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10815,6 +10827,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19627,6 +19661,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, check that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
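For illustration (the publication names below are made up, not from the patch),
these are the kinds of statements the new pub_obj_type_list grammar and
preprocess_pub_all_objtype_list() are meant to handle, matching the regression
tests further down:

    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_both FOR ALL SEQUENCES, ALL TABLES;
    -- rejected with "invalid publication object list", since each
    -- object type may be specified only once:
    CREATE PUBLICATION pub_dup FOR ALL TABLES, ALL TABLES;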
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6298edb26b5..be8097d1137 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4458,6 +4458,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4488,9 +4489,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4506,6 +4512,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4526,6 +4533,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4577,52 +4586,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not emitted for publications that are only FOR ALL SEQUENCES */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
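For the two new publication forms, the dump text produced by the branches above
is expected to look like this (these are the exact strings the 002_pg_dump.pl
additions below test for):

    CREATE PUBLICATION pub5 FOR ALL SEQUENCES;
    CREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');

i.e. the WITH (publish = ...) clause is emitted only when the publication also
covers tables.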
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 93a4475d51b..c1003b391a1 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -664,6 +664,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c7ec80e271..797fd1f7839 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3330,6 +3330,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index dbc586c5bc3..7af28109e16 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3576,12 +3576,12 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
-	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+		COMPLETE_WITH("TABLES", "SEQUENCES");
+	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
 		COMPLETE_WITH("IN SCHEMA");
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Object types that can be specified with FOR ALL ... in publication commands.
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..e3f14f880fe 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +324,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +342,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +374,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +390,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +409,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +420,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +456,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +469,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +587,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +882,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1075,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1286,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1329,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1331,7 +1403,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1340,10 +1412,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1425,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1454,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1480,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1551,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1562,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1595,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1618,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1629,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1640,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1671,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1683,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1765,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1786,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1921,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1952,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..8ad72420970 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a8656419cb6..09dce70b6e8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2350,6 +2350,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
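
The regression tests above exercise the new publication-side syntax. As a
quick, hedged illustration (not part of the patch; the sequence and
publication names below are made up), the feature can be tried like this once
the preceding patches are applied:

CREATE SEQUENCE demo_seq;
CREATE PUBLICATION demo_pub_allseq FOR ALL SEQUENCES;

-- The new pg_publication column records whether all sequences are published:
SELECT pubname, puballtables, puballsequences
  FROM pg_publication
 WHERE pubname = 'demo_pub_allseq';

-- \dRp+ demo_pub_allseq now shows the additional "All sequences" column,
-- matching the expected-output changes above.

DROP PUBLICATION demo_pub_allseq;
DROP SEQUENCE demo_seq;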

Attachment: v20250723-0005-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 40b9409e3686a6bc1cb7bba44187c0dc66633f77 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:52:27 +0530
Subject: [PATCH v20250723 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of the
       sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing the
       synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences (see
      the usage sketch below).
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
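
(Not part of the patch: a rough subscriber-side usage sketch, with placeholder
names and connection string, of the commands and states described above.)

CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION demo_pub_allseq;
-- Published sequences are added to pg_subscription_rel in INIT state and a
-- sequencesync worker synchronizes them, after which they become READY.

-- Later, after more sequences are published, re-synchronize all of them:
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

-- Check progress, assuming the existing pg_subscription_rel state codes
-- ('i' = INIT, 'r' = READY) are used for sequences:
SELECT c.relname, sr.srsubstate
  FROM pg_subscription_rel sr
  JOIN pg_class c ON c.oid = sr.srrelid
 WHERE c.relkind = 'S';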

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 585 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 136 +++-
 src/backend/replication/logical/tablesync.c   |  98 +--
 src/backend/replication/logical/worker.c      |  69 ++-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  28 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1185 insertions(+), 154 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 6cef3b9c27e..7820d8ff5ee 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -338,7 +338,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index a0b1a0ef56f..a3095cb2da0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1405,6 +1405,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index f5fa49517cf..708306b3b1c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1830,6 +1832,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5e412df02f6..116c880b46a 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1094,7 +1094,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2029,7 +2029,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 742d9ba68e9..07301f1817b 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -830,6 +839,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the sequencesync worker's last start time (last_seqsync_start_time)
+ * tracked in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -878,7 +906,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1244,7 +1272,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1545,7 +1573,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1585,6 +1613,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..329ce71d49c
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,585 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction.  Locks taken on the
+ * sequence relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process has no direct database access and would need additional
+ *    infrastructure to connect to each database and retrieve the sequences
+ *    for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Called from the apply worker to handle sequence synchronization.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	check_and_launch_sync_worker(InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because missing sequences on the publisher
+		 * can make the result contain fewer rows than the batch size.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to update the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_UPDATE);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && (subsequences != NIL))
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription on error, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
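
The INIT -> READY progression described in the file header can be observed on
the subscriber via pg_subscription_rel; a sketch (relkind 'S' identifies
sequences, and srsubstate 'i'/'r' correspond to INIT/READY):

    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';
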
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..52d43fb0eae 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * On clean exit of a sequencesync worker, reset last_seqsync_start_time
+	 * so that the apply worker can launch a new sequencesync worker without
+	 * waiting for the retry interval to elapse.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,60 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.
+ *
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ *
+ * The caller must hold LogicalRepWorkerLock; it is released before this
+ * function returns.
+ */
+void
+check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time)
+{
+	int			nsyncworkers;
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +163,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +176,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +203,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +231,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +240,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +253,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +282,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1d504f2af28..f4696343e35 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -523,50 +523,16 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			}
 			else
 			{
-				/*
-				 * If there is no sync worker for this table yet, count
-				 * running sync workers for this subscription, while we have
-				 * the lock.
-				 */
-				int			nsyncworkers =
-					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
-				/* Now safe to release the LWLock */
-				LWLockRelease(LogicalRepWorkerLock);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				check_and_launch_sync_worker(rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1249,7 +1215,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1522,7 +1488,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1567,7 +1534,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1575,7 +1542,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1589,23 +1556,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index cf9a286ee55..ea8d95d4e01 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -668,6 +668,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1208,7 +1213,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1330,7 +1338,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1386,7 +1397,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1452,7 +1466,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1587,7 +1604,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2429,7 +2449,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3939,7 +3962,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5118,7 +5144,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5238,8 +5265,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5342,6 +5369,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5361,14 +5392,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5439,6 +5472,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5451,9 +5488,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
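
As with the other worker types, a failing sequencesync worker only goes
through DisableSubscriptionAndExit when the subscription has opted in; for
illustration ("sub1" is a placeholder subscription name):

    ALTER SUBSCRIPTION sub1 SET (disable_on_error = true);
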
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
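
With the extra counter exposed by pg_stat_get_subscription_stats, sequencesync
failures can be monitored next to the existing counters; a sketch that calls
the function directly (the pg_stat_subscription_stats view presumably gains
the same column via a system_views.sql change not included in this excerpt):

    SELECT s.subname, st.apply_error_count,
           st.sequence_sync_error_count, st.sync_error_count
    FROM pg_subscription s,
         pg_stat_get_subscription_stats(s.oid) st;
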
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
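
Since tablesync and sequencesync workers now share the same per-subscription
pool, the limit can be raised in the usual way if sequence synchronization
competes with table synchronization (the GUC is PGC_SIGHUP, so a reload
suffices); for example:

    ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
    SELECT pg_reload_conf();
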
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3fe60ae82cd..29f3cc6d1fb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 363c31ff1cf..26ed37ffef4 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -102,6 +103,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
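+	/*
+	 * Time at which the apply worker last launched a sequencesync worker for
+	 * this subscription; used to avoid relaunching one before
+	 * wal_retrieve_retry_interval has elapsed.
+	 */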
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -251,6 +254,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -259,12 +263,15 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -274,11 +281,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -343,15 +351,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = false)
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION does not sync newly published sequence when copy_data is off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when the sequence definitions do not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence definitions is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8c59c8f0f10..9a331223170 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

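For readers who want to try the above TAP scenario by hand, the user-level
flow it exercises boils down to roughly the following (a sketch using the
test's own object names):

    -- Run on both the publisher and the subscriber and compare the output;
    -- matching last_value/is_called means the sequence is in sync.
    SELECT last_value, log_cnt, is_called FROM regress_s1;

    -- On the subscriber, re-synchronize all subscribed sequences if they lag:
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;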
Attachment: v20250723-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 7cb6490ddb3a08ca033f6d8ceeb368db8b666f66 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250723 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 13 files changed, 238 insertions(+), 194 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 63c2992d19f..b6ba367b877 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -489,13 +489,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 1fa931a7422..b0eb3967d8f 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3fea0a0206e..1d504f2af28 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within FinishSyncWorker(). Even if
+		 * we fail to remove it here, the apply worker will ensure it is
+		 * cleaned up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -665,37 +613,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1332,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1573,77 +1490,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1729,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1747,7 +1593,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b59221c4d06..cf9a286ee55 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1209,7 +1209,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1331,7 +1331,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1387,7 +1387,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1453,7 +1453,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1588,7 +1588,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2430,7 +2430,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3940,7 +3940,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5411,7 +5411,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index be8097d1137..1ea51f073e8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5227,12 +5227,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5286,7 +5286,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index c1003b391a1..1e032392c1b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -822,6 +822,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 0c7b8440a61..363c31ff1cf 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -247,6 +247,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -270,9 +272,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 09dce70b6e8..8c59c8f0f10 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2914,7 +2914,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

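As an aside, the per-relation sync states that these sync workers drive can
be observed on the subscriber with a query along these lines (a sketch based
on the pg_subscription_rel catalog and the srsubstate codes documented in the
0006 patch; the TAP test's poll query does essentially the same thing):

    -- List subscribed tables/sequences that have not yet reached the 'ready' state.
    SELECT srsubid, srrelid::regclass AS relation, srsubstate
    FROM pg_subscription_rel
    WHERE srsubstate <> 'r';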
Attachment: v20250723-0006-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From f4f1304bda9afff7c61d5052081e3c9b53d21103 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250723 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 97f547b3cc4..d5acc7d9b0a 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8160,16 +8160,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8203,7 +8206,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8212,12 +8215,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..80dc1d785a4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5190,9 +5190,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5333,8 +5333,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5357,10 +5357,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index fcac55aefe6..1de1c55341c 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>, and then
+   on the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber may frequently become out of sync as
+    sequences are updated on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2293,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2405,8 +2629,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2419,8 +2643,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 823afe1b30b..a1a2be86d38 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..4922db489d3 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the
+          replication starts.  The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on
+          how to handle any errors reported for sequence definition differences
+          between the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any errors reported for sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index b8cd15f3280..298480d38eb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#282Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#281)
RE: Logical Replication of sequences

Dear Vignesh,

Thanks for working on the project. Here are my comments, only for 0001 and 0002.
Sorry if my points have already been discussed; this thread is too huge for me to catch up on :-(.

01.
Do we have to document the function and expose it to users? Previously it was not documented.
Another example is pg_get_publication_tables, which is used only by the backend.

02.
```
SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
```

I came up with this way of checking the page_lsn that the function returns; how do you feel about it?

```
SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
```

03.
Regarding tab completion in psql: when I type up to "CREATE PUBLICATION pub FOR ALL TABLES"
and press tab, the word "WITH" is completed. But I'm afraid that may be premature,
because users can also input "FOR ALL TABLES, ALL SEQUENCES". Should we
suggest ", ALL SEQUENCES" here, or can we ignore it?

The same point applies when "FOR ALL SEQUENCES" is input.

04.
Same as above, "WITH" could be completed when I input till "FOR ALL SEQUENCES".
However, current patch rejects the WITH clause for ALL SEQUENCE publication.
How do you feel? I'm wondering we should stop the suggestion.

05.
is_publishable_class() has a comment atop the function:
```
* Does same checks as check_publication_add_relation() above, but does not
* need relation to be opened and also does not throw errors.
```

But this is not correct because check_publication_add_relation() does not allow
sequences. Can you modify the comment accordingly?

06.
```
values[Anum_pg_publication_puballtables - 1] = stmt->for_all_tables;
values[Anum_pg_publication_puballsequences - 1] = stmt->for_all_sequences;
```

They should be passed as Datum like others.

07.
```
+       /*
+        * indicates that this is special publication which should encompass all
+        * sequences in the database (except for the unlogged and temp ones)
+        */
+       bool            puballsequences;
```

Let me share my thoughts here. I initially wondered whether we should synchronize
unlogged sequences, because it is actually possible. However, I found an article[1]https://www.crunchydata.com/blog/postgresql-unlogged-sequences#unlogged-sequences-in-postgres-have-no-performance-gain
saying that the primary use case for an unlogged sequence is to be associated with an unlogged table.
Based on that point, I agree not to include such sequences - they might well be leaked
sequences.

08.
```
@@ -1902,6 +1952,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
errcode(ERRCODE_SYNTAX_ERROR),
errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));

+               if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+                       ereport(ERROR,
+                                       errcode(ERRCODE_UNDEFINED_OBJECT),
+                                       errmsg("relation \"%s\" is not part of the publication",
+                                                  RelationGetRelationName(rel)),
+                                       errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
```

Hmm, I feel this is not needed because sequences cannot be specified as the
target in the first place. Other objects, like views, are not handled here.

09.
```
+ StringInfo pub_type;
+
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
```

I feel this blank line is not needed.

10.
```
/*
* Process all_objects_list to set all_tables/all_sequences.
* Also, checks if the pub_object_type has been specified more than once.
*/
static void
preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
bool *all_sequences, core_yyscan_t yyscanner)
```

I'm not a native speaker, but I feel the object list is "processed" here, not
"preprocessed". Can we rename it to process_pub_all_objtype_list?

11.
```
+       /* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+       if (!pubinfo->puballsequences || pubinfo->puballtables)
```

This means that a FOR ALL TABLES/SEQUENCES publication without any options can be
dumped like:

```
CREATE PUBLICATION pub FOR ALL TABLES, ALL SEQUENCES WITH (publish = 'insert, update, delete, truncate')
```

However, since the WITH clause is emitted together with ALL SEQUENCES, the generated
script could raise a WARNING, which may be confusing for users:

```
$ psql -U postgres -f dump.sql
...
psql:dump.sql:24: WARNING: WITH clause parameters do not affect sequence synchronization
```

One workaround is not to output the WITH clause when only the default settings are used. Thoughts?
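
To illustrate, the two forms would look roughly like this (only the
default-parameter case would change):

```
-- as dumped today, with the default publish options spelled out:
CREATE PUBLICATION pub FOR ALL TABLES, ALL SEQUENCES WITH (publish = 'insert, update, delete, truncate');

-- with the proposed workaround, when only default settings are used:
CREATE PUBLICATION pub FOR ALL TABLES, ALL SEQUENCES;
```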

[1]: https://www.crunchydata.com/blog/postgresql-unlogged-sequences#unlogged-sequences-in-postgres-have-no-performance-gain

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#283vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#282)
6 attachment(s)
Re: Logical Replication of sequences

On Thu, 24 Jul 2025 at 13:12, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Thanks for working on the project. Here are my comments, only for 0001 and 0002.
Sorry if my points have already been discussed; this thread is too huge for me to catch up on :-(.

01.
Do we have to document the function and expose it to users? Previously it was not documented.
Another example is pg_get_publication_tables, which is used only by the backend.

Now we are exposing this function to the user; the LSN value returned will be
used to find out whether the sequence on the publisher has changed compared to
the subscriber, and to decide whether it should be re-synchronized.
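
As a rough sketch of that check (using an example sequence 'test_seq1' here;
the subscriber side relies on the later patches storing the synchronized
sequence LSN in pg_subscription_rel.srsublsn):

```
-- On the publisher: the current page LSN of the sequence
SELECT page_lsn FROM pg_get_sequence_data('test_seq1');

-- On the subscriber: the LSN recorded when the sequence was last synchronized
SELECT sr.srsublsn
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relname = 'test_seq1';

-- If the publisher's page_lsn is newer than the stored srsublsn, the sequence
-- has advanced since the last sync and can be re-synchronized with
-- ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.
```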

02.
```
SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
```

I came up with this way of checking the page_lsn that the function returns; how do you feel about it?

```
SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
```

Modified

03.
Regarding tab completion in psql: when I type up to "CREATE PUBLICATION pub FOR ALL TABLES"
and press tab, the word "WITH" is completed. But I'm afraid that may be premature,
because users can also input "FOR ALL TABLES, ALL SEQUENCES". Should we
suggest ", ALL SEQUENCES" here, or can we ignore it?

The same point applies when "FOR ALL SEQUENCES" is input.

This was discussed earlier at [1]/messages/by-id/CABdArM5axwoTorZnJww5rE79SNzvnnXCfWkv7XJex1Rkz=JDog@mail.gmail.com:
Tab-completion is not supported after a comma (,) in any other case either.
For example, the following commands are valid, but tab-completion does
not work after the comma:
CREATE PUBLICATION pub7 FOR TABLE t1, TABLES IN SCHEMA public;
CREATE PUBLICATION pub7 FOR TABLES IN SCHEMA public, TABLES IN SCHEMA schema2;

04.
Same as above, "WITH" could be completed when I input till "FOR ALL SEQUENCES".
However, current patch rejects the WITH clause for ALL SEQUENCE publication.
How do you feel? I'm wondering we should stop the suggestion.

Modified

05.
is_publishable_class() has a comment atop the function:
```
* Does same checks as check_publication_add_relation() above, but does not
* need relation to be opened and also does not throw errors.
```

But this is not correct because check_publication_add_relation() does not allow
sequences. Can you modify the comment accordingly?

Modified

06.
```
values[Anum_pg_publication_puballtables - 1] = stmt->for_all_tables;
values[Anum_pg_publication_puballsequences - 1] = stmt->for_all_sequences;
```

They should be passed as Datum like others.

Modified

07.
```
+       /*
+        * indicates that this is special publication which should encompass all
+        * sequences in the database (except for the unlogged and temp ones)
+        */
+       bool            puballsequences;
```

Let me share my thoughts here. I initially wondered whether we should synchronize
unlogged sequences, because it is actually possible. However, I found an article[1]
saying that the primary use case for an unlogged sequence is to be associated with an unlogged table.
Based on that point, I agree not to include such sequences - they might well be leaked
sequences.

I agree

08.
```
@@ -1902,6 +1952,13 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
errcode(ERRCODE_SYNTAX_ERROR),
errmsg("column list must not be specified in ALTER PUBLICATION ... DROP"));

+               if (RelationGetForm(rel)->relkind == RELKIND_SEQUENCE)
+                       ereport(ERROR,
+                                       errcode(ERRCODE_UNDEFINED_OBJECT),
+                                       errmsg("relation \"%s\" is not part of the publication",
+                                                  RelationGetRelationName(rel)),
+                                       errdetail_relkind_not_supported(RelationGetForm(rel)->relkind));
+
```

Hmm, I feel this is not needed because sequences cannot be specified as the
target in the first place. Other objects, like views, are not handled here.

Modified

09.
```
+ StringInfo pub_type;
+
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
```

I feel this blank line is not needed.

Modified

10.
```
/*
* Process all_objects_list to set all_tables/all_sequences.
* Also, checks if the pub_object_type has been specified more than once.
*/
static void
preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
bool *all_sequences, core_yyscan_t yyscanner)
```

I'm not a native speaker, but I feel the object list is "processed" here, not
"preprocessed". Can we rename it to process_pub_all_objtype_list?

Since we have an existing function preprocess_pubobj_list which uses
"preprocess", I am keeping it as "preprocess" to maintain consistency.

11.
```
+       /* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+       if (!pubinfo->puballsequences || pubinfo->puballtables)
```

This means that a FOR ALL TABLES/SEQUENCES publication without any options can be
dumped like:

```
CREATE PUBLICATION pub FOR ALL TABLES, ALL SEQUENCES WITH (publish = 'insert, update, delete, truncate')
```

However, since the WITH clause is emitted together with ALL SEQUENCES, the generated
script could raise a WARNING, which may be confusing for users:

```
$ psql -U postgres -f dump.sql
...
psql:dump.sql:24: WARNING: WITH clause parameters do not affect sequence synchronization
```

One workaround is not to output the WITH clause when only the default settings are used. Thoughts?

I felt this is OK; it will make users aware that these parameters are not
applicable for sequences.

The attached v20250724 version of the patch set has the changes for the same.
[1]: /messages/by-id/CABdArM5axwoTorZnJww5rE79SNzvnnXCfWkv7XJex1Rkz=JDog@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250724-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch; charset=UTF-8)
From 6bf61aada04db04227af4f4030df05bac2cf1618 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:19:47 +0530
Subject: [PATCH v20250724] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 +++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index de5b5929ee0..5189dd754f7 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>
+        indicates whether the sequence has been used, <literal>log_cnt</literal>
+        shows how many fetches remain before a new WAL record must be written,
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..f5fa49517cf 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1778,15 +1779,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1801,6 +1803,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1816,11 +1822,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3ee8fed7e53..9cc52a7c83f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250724-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch; charset=US-ASCII)
From c3fea07a10fc3ac619244a3c5da7d4017c41b621 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250724 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 13 files changed, 238 insertions(+), 194 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 63c2992d19f..b6ba367b877 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -489,13 +489,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 1fa931a7422..b0eb3967d8f 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3fea0a0206e..1d504f2af28 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -665,37 +613,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1332,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1573,77 +1490,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1729,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1747,7 +1593,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b59221c4d06..cf9a286ee55 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1209,7 +1209,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1331,7 +1331,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1387,7 +1387,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1453,7 +1453,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1588,7 +1588,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2430,7 +2430,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3940,7 +3940,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5411,7 +5411,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..6dc46a78af2 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,8 +243,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index be8097d1137..1ea51f073e8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5227,12 +5227,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5286,7 +5286,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index c1003b391a1..1e032392c1b 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -822,6 +822,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 0c7b8440a61..363c31ff1cf 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -247,6 +247,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -270,9 +272,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0701234b313..4d76bedf9dc 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2915,7 +2915,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250724-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch; charset=US-ASCII)
From 1cfffb1590e0efe7c7bc09315318fe7990ef6f54 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:34:50 +0530
Subject: [PATCH v20250724 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  82 +++++
 src/backend/catalog/pg_subscription.c       |  61 ++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 387 +++++++++++++++-----
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   1 +
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |  11 +-
 src/test/regress/expected/subscription.out  |   4 +-
 14 files changed, 472 insertions(+), 118 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d7d33c8b709..f4b5b7915bb 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1065,6 +1065,42 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
 	return result;
 }
 
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+	Relation	classRel;
+	ScanKeyData key[1];
+	TableScanDesc scan;
+	HeapTuple	tuple;
+	List	   *result = NIL;
+
+	classRel = table_open(RelationRelationId, AccessShareLock);
+
+	ScanKeyInit(&key[0],
+				Anum_pg_class_relkind,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(RELKIND_SEQUENCE));
+
+	scan = table_beginscan_catalog(classRel, 1, key);
+
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+		Oid			relid = relForm->oid;
+
+		if (is_publishable_class(relid, relForm))
+			result = lappend_oid(result, relid);
+	}
+
+	table_endscan(scan);
+
+	table_close(classRel, AccessShareLock);
+	return result;
+}
+
 /*
  * Get publication using oid
  *
@@ -1337,3 +1373,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllSequencesPublicationRelations();
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b6ba367b877..6cef3b9c27e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
@@ -463,7 +464,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -500,7 +503,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -512,8 +516,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -525,12 +543,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise, get
+ * only the tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences; otherwise,
+ * get only the sequences that have not reached READY state (i.e. are still
+ * in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -539,6 +567,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -561,9 +592,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		bool		issequence;
+		bool		istable;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
+		istable = !issequence;
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && issequence)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && istable)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index f6eca09ee15..a0b1a0ef56f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index cd6c3684482..5e412df02f6 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,6 +107,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +717,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: If the subscription is for a sequence-only publication, creating a
+	 * replication origin is unnecessary because incremental synchronization
+	 * of sequences is not supported, and sequence data is fully synced during
+	 * a REFRESH, which does not rely on the origin. If the publication is
+	 * later modified to include tables, the origin can be created during the
+	 * ALTER SUBSCRIPTION ... REFRESH command.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +736,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +750,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		table_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -758,13 +769,16 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -781,6 +795,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: If the subscription is for a sequence-only publication,
+			 * creating this slot is unnecessary. It can be created later
+			 * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+			 * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+			 * publication is updated to include tables.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +824,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -843,18 +863,55 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Refresh the subscription so that the set of relations it tracks matches
+ * the publication objects (tables and/or sequences) on the publisher.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the last subscription creation
+ * or refresh publication.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or removing
+ * sequences that have been added or removed since the last subscription
+ * creation or refresh publication.
+ *
+ * Note: this is a common function for handling the different REFRESH
+ * commands, distinguished by the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,10 +919,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+#ifdef USE_ASSERT_CHECKING
+	/* Sanity checks for parameter values */
+	if (resync_all_sequences)
+		Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -885,16 +949,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database, so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -907,22 +978,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->retaindeadtuples, sub->origin,
+									  subrel_local_oids, subrel_count,
+									  sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +999,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,28 +1016,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1079,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * stop workers only when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1136,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1153,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1538,8 +1643,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1553,7 +1658,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1593,8 +1699,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1612,18 +1718,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1742,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,12 +1755,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1931,7 +2052,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2246,16 +2367,16 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * 	  statements with "retain_dead_tuples = true" and "origin = any", and for
+ * 	  ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
  *    when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
@@ -2314,24 +2435,28 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION, subrel_local_oids
+	 * contains the list of relation oids that are already present on the
+	 * subscriber. This check should be skipped for these tables if checking
+	 * for table sync scenario. However, when handling the retain_dead_tuples
+	 * scenario, ensure all tables are checked, as some existing tables may now
+	 * include changes from other origins due to newly created subscriptions on
+	 * the publisher.
 	 */
 	if (check_table_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								schemaname, tablename);
+			}
 		}
 	}
 
@@ -2611,6 +2736,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f262e7a66f7..b58e81424ab 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9888e33d8df..25cf78e0e30 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10958,11 +10958,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 69a91826254..f452ea1542a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9cc52a7c83f..3fe60ae82cd 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..1af265aa174 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -171,6 +171,7 @@ typedef enum PublicationPartOpt
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
@@ -2175,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2185,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..629e2617f63 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
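
To illustrate the REFRESH behaviour implemented above (see the
AlterSubscription_refresh() header comment), here is a minimal usage
sketch; the subscription name is taken from the regression tests and is
illustrative only, and it assumes the subscribed publication already
includes sequences:

  -- pick up tables and sequences newly added to the publication
  ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;

  -- additionally re-synchronize the data of all sequences already
  -- known to the subscription (they are set back to INIT state)
  ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;

Both commands require an enabled subscription and cannot run inside a
transaction block, per the checks added above.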

Attachment: v20250724-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 71b3778f8e1bde98236fd140c422d61070b33349 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:23:41 +0530
Subject: [PATCH v20250724 2/2] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql meta-commands are enhanced: \d now shows which
publications include the specified sequence, and \dRp shows whether a
publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a FOR ALL publication. It cannot be combined with other publication
objects such as TABLE or TABLES IN SCHEMA.
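
For example, the following forms are accepted (publication names are
illustrative only):

  CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
  CREATE PUBLICATION pub_both FOR ALL TABLES, ALL SEQUENCES;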

Author: Vignesh C, Tomas Vondra
Reviewed-by: Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 102 ++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 564 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  37 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 767 insertions(+), 379 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..d7d33c8b709 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..b4e45fb647f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -881,11 +884,32 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_generated_columns_given,
 							  &publish_generated_columns);
 
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a
+		 * FOR ALL SEQUENCES publication. If the publication includes tables
+		 * as well, issue a warning.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters do not affect sequence synchronization"));
+	}
+
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +942,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1000,6 +1024,25 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication. If the publication includes tables
+		 * as well, issue a warning.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters do not affect sequence synchronization"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1483,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1496,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2019,19 +2070,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 73345bb3c70..9888e33d8df 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +450,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10677,7 +10683,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10697,13 +10708,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10815,6 +10827,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19627,6 +19661,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6298edb26b5..be8097d1137 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4458,6 +4458,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4488,9 +4489,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4506,6 +4512,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4526,6 +4533,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4577,52 +4586,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause does not apply to FOR ALL SEQUENCES-only publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 93a4475d51b..c1003b391a1 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -664,6 +664,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c7ec80e271..797fd1f7839 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3330,6 +3330,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
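+	# The dump should list the objects in the canonical "ALL TABLES, ALL
+	# SEQUENCES" order and keep the WITH clause, regardless of the order
+	# used in the CREATE PUBLICATION command.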
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
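+		/* Set the title before printTableInit(), which stores the pointer */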
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
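+		/*
+		 * puballsequences is always part of the query result (selected as
+		 * "false" on older servers), but show the column only when the
+		 * server actually has it.
+		 */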
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index dbc586c5bc3..69a91826254 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3576,11 +3576,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that encompasses all
+	 * sequences in the database (except unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Object types that can be specified in the FOR ALL ... clause.
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
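+/*
+ * One object type listed in the FOR ALL ... clause of CREATE PUBLICATION.
+ */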
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..e3f14f880fe 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,96 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +324,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +342,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +374,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +390,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +409,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +420,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +456,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +469,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +587,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +882,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1075,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1286,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1329,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1331,7 +1403,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1340,10 +1412,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1425,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1454,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1480,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1551,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1562,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1583,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1595,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1618,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1629,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1640,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1671,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1683,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1765,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1786,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1921,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1952,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..8ad72420970 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,43 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 4353befab99..0701234b313 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2351,6 +2351,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
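
For anyone skimming the regression diffs above, here is a minimal sketch of the publisher-side syntax those tests exercise (object names such as s1, pub_seq and pub_both are illustrative, not taken from the patch):

    CREATE SEQUENCE s1;
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_both FOR ALL SEQUENCES, ALL TABLES;
    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname IN ('pub_seq', 'pub_both');
    \dRp+ pub_seq    -- psql output now includes an "All sequences" column
    DROP PUBLICATION pub_seq, pub_both;
    DROP SEQUENCE s1;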

Attachment: v20250724-0005-New-worker-for-sequence-synchronization-du.patch (text/x-patch; charset=US-ASCII)
From 626b9de6ce393e23453414300ca6d3ca14d37553 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:52:27 +0530
Subject: [PATCH v20250724 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiates the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiates the sequencesync worker (see above) to synchronize all
      sequences.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
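
A quick subscriber-side illustration of the flow described above (the connection string and object names are illustrative, and the 'i'/'r' codes for the INIT/READY states are an assumption, not taken from the patch):

    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub_seq;
    -- pick up newly published sequences later on:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
    -- re-synchronize all published sequences (new command in this patch):
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
    -- see which sequences the sequencesync worker still has to process:
    SELECT srrelid::regclass, srsubstate
    FROM pg_subscription_rel
    WHERE srsubstate <> 'r';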
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 585 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 136 +++-
 src/backend/replication/logical/tablesync.c   |  98 +--
 src/backend/replication/logical/worker.c      |  69 ++-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  28 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 24 files changed, 1185 insertions(+), 154 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 6cef3b9c27e..7820d8ff5ee 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -338,7 +338,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index a0b1a0ef56f..a3095cb2da0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1405,6 +1405,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index f5fa49517cf..708306b3b1c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1830,6 +1832,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5e412df02f6..116c880b46a 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1094,7 +1094,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2029,7 +2029,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 742d9ba68e9..07301f1817b 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -830,6 +839,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_start_time of the sequencesync worker in the subscription's
+ * apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -878,7 +906,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1244,7 +1272,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1545,7 +1573,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1585,6 +1613,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..329ce71d49c
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,585 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	check_and_launch_sync_worker(InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences to have parameters matching those on the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences to have parameters matching those on the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy the existing data of sequences from the publisher.
+ *
+ * Fetch the sequence values from the publisher and set the subscriber
+ * sequences to the same values.  The caller is responsible for locking the
+ * local relations.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the list of sequences in the current batch to be fetched
+		 * from the publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
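+		/*
+		 * For illustration (hypothetical sequence names): a batch containing
+		 * public.s1 and public.s2 substitutes the VALUES list
+		 * ('public', 's1'), ('public', 's2') above, i.e. one row per sequence
+		 * in the batch, which is then joined against the publisher's catalogs
+		 * and pg_get_sequence_data() so that one row is returned for each
+		 * sequence that exists on the publisher.
+		 */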
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
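+				/*
+				 * The parameters match, so install the publisher's current
+				 * state (last_value, log_cnt, is_called) into the local
+				 * sequence and mark it READY at the publisher's page LSN.
+				 */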
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If sequence synchronization for this batch was incomplete, some
+		 * sequences are missing on the publisher.  Identify them so they can
+		 * be reported once all batches have been processed.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the full batch size rather than by the
+		 * number of fetched rows, since some sequences in the batch may be
+		 * missing on the publisher.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that sequence synchronization runs as the sequence owner,
+		 * unless the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
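+	/*
+	 * If we switched to the sequence owner above, restore the original user
+	 * context now that synchronization is complete.
+	 */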
+	if (!run_as_owner && (subsequences != NIL))
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..52d43fb0eae 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,60 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.
+ *
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ *
+ * The caller must hold LogicalRepWorkerLock before calling this function.
+ */
+void
+check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time)
+{
+	int			nsyncworkers;
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +163,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +176,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +203,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +231,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +240,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +253,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
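+			/*
+			 * For sequences we only need to remember that at least one is
+			 * pending; tables are tracked individually in
+			 * table_states_not_ready.
+			 */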
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +282,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1d504f2af28..f4696343e35 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -523,50 +523,16 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			}
 			else
 			{
-				/*
-				 * If there is no sync worker for this table yet, count
-				 * running sync workers for this subscription, while we have
-				 * the lock.
-				 */
-				int			nsyncworkers =
-					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
-				/* Now safe to release the LWLock */
-				LWLockRelease(LogicalRepWorkerLock);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				check_and_launch_sync_worker(rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1249,7 +1215,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1522,7 +1488,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1567,7 +1534,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1575,7 +1542,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1589,23 +1556,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index cf9a286ee55..ea8d95d4e01 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -668,6 +668,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			Assert(0);
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1208,7 +1213,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1330,7 +1338,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1386,7 +1397,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1452,7 +1466,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1587,7 +1604,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2429,7 +2449,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3939,7 +3962,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5118,7 +5144,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5238,8 +5265,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5342,6 +5369,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5361,14 +5392,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5439,6 +5472,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5451,9 +5488,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3fe60ae82cd..29f3cc6d1fb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 363c31ff1cf..26ed37ffef4 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -102,6 +103,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -251,6 +254,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -259,12 +263,15 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -274,11 +281,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -343,15 +351,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false will not sync newly published sequence'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 4d76bedf9dc..f559b628a5b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1628,6 +1628,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250724-0006-Documentation-for-sequence-synchronization.patchtext/x-patch; charset=US-ASCII; name=v20250724-0006-Documentation-for-sequence-synchronization.patchDownload
From 71ff5f98185d0bfdff6cc296ccf2dfd4c5433c67 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250724 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 97f547b3cc4..d5acc7d9b0a 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8160,16 +8160,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8203,7 +8206,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8212,12 +8215,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..80dc1d785a4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5190,9 +5190,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5333,8 +5333,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5357,10 +5357,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index fcac55aefe6..1de1c55341c 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2293,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2405,8 +2629,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2419,8 +2643,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 823afe1b30b..a1a2be86d38 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..4922db489d3 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle errors caused by sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle errors caused by sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index b8cd15f3280..298480d38eb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle errors caused by sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#284Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Hayato Kuroda (Fujitsu) (#282)
RE: Logical Replication of sequences

Dear Vignesh,

Here are the remaining comments for v20250723 0003-0005. I've not checked the latest version.

01.
```
* PostgreSQL logical replication: common synchronization code
```

How about: "Common code for synchronizations"? Since this file is located in
replication/logical, the initial part seems a bit redundant.

02.

How do you feel about splitting out a syncutils.h header file? We can put some
definitions needed for synchronization there.

03.
```
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
```

I assume this function obtains both tables and sequences. I'm now wondering whether
we can say "relation" for both tables and sequences in this context. E.g.,
getTableData()->makeTableDataInfo() seems to obtain both table and sequence data,
and dumpSequenceData() dumps sequences. How about keeping getSubscriptionTables, or
using getSubscriptionTablesAndSequences?

04.
```
/*
* dumpSubscriptionTable
* Dump the definition of the given subscription table mapping. This will be
* used only in binary-upgrade mode for PG17 or later versions.
*/
static void
dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo)
```

If you rename getSubscriptionTables, dumpSubscriptionTable should also be renamed.

05.
```
/*
* Gets list of all relations published by FOR ALL SEQUENCES publication(s).
*/
List *
GetAllSequencesPublicationRelations(void)
```

It looks very similar to GetAllTablesPublicationRelations(). Can we combine them?
I feel we can pass the kind of target relation and pubviaroot.

06.
```
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/memutils.h"
```

I can build without the header.

07.
```
+       /*
+        * XXX: If the subscription is for a sequence-only publication, creating a
+        * replication origin is unnecessary because incremental synchronization
+        * of sequences is not supported, and sequence data is fully synced during
+        * a REFRESH, which does not rely on the origin. If the publication is
+        * later modified to include tables, the origin can be created during the
+        * ALTER SUBSCRIPTION ... REFRESH command.
+        */
        ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
        replorigin_create(originname);
```

The comment is a bit misleading because currently we create the replication origin
in this case. Can you clarify the point, like:
```
XXX: Now the replication origin is created for all the cases, but it is unnecessary
when the subscription is for a sequence-only publication....
```

08.
```
+                        * XXX: If the subscription is for a sequence-only publication,
+                        * creating this slot is unnecessary. It can be created later
+                        * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+                        * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+                        * publication is updated to include tables.
```

Same as above.

09.
```
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true
```

IIUC assertions are rarely described in the comment atop a function. If you want to
add it, we should say something like:
```
In Assert enabled builds, we verify that parameters are passed correctly...
```

10.
```
+#ifdef USE_ASSERT_CHECKING
+       /* Sanity checks for parameter values */
+       if (resync_all_sequences)
+               Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
```

How about the following, which does not require the ifdef?

```
Assert(!resync_all_sequences ||
(copy_data && !refresh_tables && refresh_sequences));

```

11.
```
+               bool            issequence;
+               bool            istable;
```

Isn't it enough to use istable?

12.
```
+               /* Relation is either a sequence or a table */
+               issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
```

How about adding an Assert() to ensure the relation is either a table or a sequence?

13.
```
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
```
The two parts describe mostly the same point. How about:
If true, this function returns only the relations that are not in a ready state.
Otherwise returns all the relations of the subscription.

14.
```
+ char table_state;
```

It should be `relation_state`.

15.
```
+#define SEQ_LOG_CNT_INVALID 0
```

Can you add a comment on how we use it?

16.
```
+
+       TimestampTz last_seqsync_start_time;
```

I can't find any user of this attribute; is it needed?

17.
```
+                               FetchRelationStates(&has_pending_sequences);
+                               ProcessSyncingTablesForApply(current_lsn);
+                               if (has_pending_sequences)
+                                       ProcessSyncingSequencesForApply();
```

IIUC we do not always call ProcessSyncingSequencesForApply() because it would acquire
the LW lock. Can you clarify this in the comments?

18.
```
+               case WORKERTYPE_SEQUENCESYNC:
+                       /* Should never happen. */
+                       Assert(0);
```

Should we call elog(ERROR) instead of Assert(0), like the other cases?

19.
```
/* Find the leader apply worker and signal it. */
logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
```

Do we have to signal the leader even when the sequence worker exits?

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#285vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#284)
6 attachment(s)
Re: Logical Replication of sequences

On Mon, 28 Jul 2025 at 13:26, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Here are the remaining comments for v20250723 0003-0005. I've not checked the latest version.

01.
```
* PostgreSQL logical replication: common synchronization code
```

How about: "Common code for synchronizations"? Since this file is located in
replication/logical, the initial part seems a bit redundant.

I felt this is ok as tablesync.c has a similar file header.

02.

How do you feel about splitting out a syncutils.h header file? We can put some
definitions needed for synchronization there.

Currently we only have 5 functions here; we can add a separate header if
more functions get added later.

03.
```
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
```

I assume this function obtains both tables and sequences. I'm now wondering whether
we can say "relation" for both tables and sequences in this context. E.g.,
getTableData()->makeTableDataInfo() seems to obtain both table and sequence data,
and dumpSequenceData() dumps sequences. How about keeping getSubscriptionTables, or
using getSubscriptionTablesAndSequences?

Retained getSubscriptionTables as before. In the pg_dump code it is used
like that, i.e., a function with "table" in its name handles not only
tables but other relation objects too.

04.
```
/*
* dumpSubscriptionTable
* Dump the definition of the given subscription table mapping. This will be
* used only in binary-upgrade mode for PG17 or later versions.
*/
static void
dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo)
```

If you rename getSubscriptionTables, dumpSubscriptionTable should also be renamed.

Since we are not renaming getSubscriptionTables, there is nothing to do here.

05.
```
/*
* Gets list of all relations published by FOR ALL SEQUENCES publication(s).
*/
List *
GetAllSequencesPublicationRelations(void)
```

It looks very similar to GetAllTablesPublicationRelations(). Can we combine them?
I feel we can pass the kind of target relation and pubviaroot.

Modified

06.
```
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -27,6 +27,7 @@
#include "utils/array.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
+#include "utils/memutils.h"
```

I can build without the header.

Removed this

07.
```
+       /*
+        * XXX: If the subscription is for a sequence-only publication, creating a
+        * replication origin is unnecessary because incremental synchronization
+        * of sequences is not supported, and sequence data is fully synced during
+        * a REFRESH, which does not rely on the origin. If the publication is
+        * later modified to include tables, the origin can be created during the
+        * ALTER SUBSCRIPTION ... REFRESH command.
+        */
ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
replorigin_create(originname);
```

The comment is a bit misleading because currently we create the replication origin
in this case. Can you clarify the point, like:
```
XXX: Now the replication origin is created for all the cases, but it is unnecessary
when the subscription is for a sequence-only publication....
```

Modified

08.
```
+                        * XXX: If the subscription is for a sequence-only publication,
+                        * creating this slot is unnecessary. It can be created later
+                        * during the ALTER SUBSCRIPTION ... REFRESH PUBLICATION or ALTER
+                        * SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES command, if the
+                        * publication is updated to include tables.
```

Same as above.

Modified

09.
```
+ *    Assert copy_data is true.
+ *    Assert refresh_tables is false.
+ *    Assert refresh_sequences is true
```

IIUC assertions are rarely described in the comment atop a function. If you want to
add it, we should say something like:
```
In Assert enabled builds, we verify that parameters are passed correctly...
```

Removed these

10.
```
+#ifdef USE_ASSERT_CHECKING
+       /* Sanity checks for parameter values */
+       if (resync_all_sequences)
+               Assert(copy_data && !refresh_tables && refresh_sequences);
+#endif
```

How about the following, which does not require the ifdef?

```
Assert(!resync_all_sequences ||
(copy_data && !refresh_tables && refresh_sequences));

```

Modified

11.
```
+               bool            issequence;
+               bool            istable;
```

Isn't it enough to use istable?

Removed both of them and added a relkind variable, which will also help
with the next comment.

12.
```
+               /* Relation is either a sequence or a table */
+               issequence = get_rel_relkind(subrel->srrelid) == RELKIND_SEQUENCE;
```

How about adding an Assert() to ensure the relation is either a table or a sequence?

Modified

13.
```
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
```
The two parts describe mostly the same point. How about:
If true, this function returns only the relations that are not in a ready state.
Otherwise returns all the relations of the subscription.

Since we have get_tables and get_sequences, the existing version is clearer.

14.
```
+ char table_state;
```

It should be `relation_state`.

Modified

15.
```
+#define SEQ_LOG_CNT_INVALID 0
```

Can you add a comment on how we use it?

We have the following comment in the SetSequence function:
* log_cnt is currently used only by the sequence syncworker to set the
* log_cnt for sequences while synchronizing values from the publisher.

Here, all callers other than the sequence sync worker will pass
SEQ_LOG_CNT_INVALID to set log_cnt to 0.

I felt this is enough.
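
As a quick illustration of the convention described above (the SetSequence() call here is hypothetical and its actual signature in the patch may differ; the pub_* variables stand for values fetched from the publisher):

```
/* Ordinary callers have no meaningful log_cnt, so they pass the sentinel
 * and the sequence's log_cnt is reset to 0. */
SetSequence(seqrelid, last_value, is_called, SEQ_LOG_CNT_INVALID);

/* The sequence sync worker instead passes the log_cnt it fetched from the
 * publisher, so the synchronized state matches the publisher's. */
SetSequence(seqrelid, pub_last_value, pub_is_called, pub_log_cnt);
```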

16.
```
+
+       TimestampTz last_seqsync_start_time;
```

I can't find any user of this attribute; is it needed?

This is used in check_and_launch_sync_worker to check, in case of
failures, whether wal_retrieve_retry_interval has elapsed before starting
the sequence sync worker again.
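
To make the retry logic concrete, here is a minimal sketch (not the patch's actual code) of how such a check could look; launch_sequence_sync_worker() is an assumed helper name, while GetCurrentTimestamp(), TimestampDifferenceExceeds() and wal_retrieve_retry_interval are existing backend facilities:

```
TimestampTz now = GetCurrentTimestamp();

/* Respawn only after the retry interval has elapsed since the last start. */
if (TimestampDifferenceExceeds(last_seqsync_start_time, now,
							   wal_retrieve_retry_interval))
{
	last_seqsync_start_time = now;
	launch_sequence_sync_worker();	/* assumed helper */
}
```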

17.
```
+                               FetchRelationStates(&has_pending_sequences);
+                               ProcessSyncingTablesForApply(current_lsn);
+                               if (has_pending_sequences)
+                                       ProcessSyncingSequencesForApply();
```

IIUC we do not always call ProcessSyncingSequencesForApply() because it would acquire
the LW lock. Can you clarify this in the comments?

FetchRelationStates will indicate whether there are any sequences to be
synchronized by setting has_pending_sequences. If there are no sequences
to be synchronized, there is no point in calling
ProcessSyncingSequencesForApply. I felt there was no need to add any
comments for this. Thoughts?

18.
```
+               case WORKERTYPE_SEQUENCESYNC:
+                       /* Should never happen. */
+                       Assert(0);
```

Should we call elog(ERROR) instead of Assert(0), like the other cases?

Modified

19.
```
/* Find the leader apply worker and signal it. */
logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
```

Do we have to signal the leader even when the sequence worker exits?

Consider the case where the apply worker, on its last run, could not
allocate a tablesync worker because no worker was available at that time.
After the sequence sync worker signals the apply worker, the apply worker
can check whether a worker can now be allotted, and it can also check for
any new sequences that need to be synced because of a refresh publication.

Thanks for the comments; the attached v20250728 version of the patch has
the changes for the same.

Regards,
Vignesh

Attachments:

v20250728-0001-Enhance-pg_get_sequence_data-function.patchapplication/octet-stream; name=v20250728-0001-Enhance-pg_get_sequence_data-function.patchDownload
From d74e74aaaac6193ba309753646d3c1a3ff1aecc0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:19:47 +0530
Subject: [PATCH v20250728 1/6] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 +++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index de5b5929ee0..5189dd754f7 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19945,6 +19945,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>
+        indicates whether the sequence has been used, <literal>log_cnt</literal>
+        shows how many fetches remain before a new WAL record must be written,
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..f5fa49517cf 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1778,15 +1779,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1801,6 +1803,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1816,11 +1822,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3ee8fed7e53..9cc52a7c83f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250728-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patchapplication/octet-stream; name=v20250728-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patchDownload
From 12320a613649470bc85e166df9db96822d7f1bc3 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:34:50 +0530
Subject: [PATCH v20250728 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch introduces a new command to synchronize the sequences of
a subscription:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  64 +++-
 src/backend/catalog/pg_subscription.c       |  60 ++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 382 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |   8 +
 src/test/regress/expected/subscription.out  |   4 +-
 14 files changed, 438 insertions(+), 126 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d7d33c8b709..3f15ccaac04 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,7 +777,7 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
  * should use GetAllTablesPublicationRelations().
  */
 List *
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to a FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllTablesPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllTablesPublicationRelations(RELKIND_RELATION,
+																   pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,50 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllTablesPublicationRelations(RELKIND_SEQUENCE,
+														 false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b6ba367b877..dabd87a622d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -463,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -500,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -512,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -525,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * get only the tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, get only the sequences that have not reached READY state
+ * (i.e. are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -539,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -561,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index f6eca09ee15..a0b1a0ef56f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index cd6c3684482..25ca2898be4 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,6 +107,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +717,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +734,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +748,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -755,16 +764,19 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -773,7 +785,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				CheckSubscriptionRelkind(get_rel_relkind(relid),
 										 rv->schemaname, rv->relname);
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -781,6 +793,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +821,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -843,18 +860,52 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'refresh_tables' is true, update the subscription by adding or removing
+ * tables that have been added or removed since the subscription was created
+ * or its publication was last refreshed.
+ *
+ * If 'refresh_sequences' is true, update the subscription by adding or
+ * removing sequences that have been added or removed since the subscription
+ * was created or its publication was last refreshed.
+ *
+ * Note that this is a common function handling the different REFRESH commands
+ * according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool refresh_tables,
+						  bool refresh_sequences, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,10 +913,14 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
+	Assert(!resync_all_sequences ||
+		   (copy_data && !refresh_tables && refresh_sequences));
+
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
 
@@ -885,16 +940,23 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get the sequence list from publisher. */
+		if (refresh_sequences)
+			pubrel_names = list_concat(pubrel_names,
+									   fetch_sequence_list(wrconn,
+														   sub->publications));
+
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, refresh_sequences, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -907,22 +969,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->retaindeadtuples, sub->origin,
+									  subrel_local_oids, subrel_count,
+									  sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +990,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,28 +1007,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1070,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1127,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1144,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1538,8 +1634,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1553,7 +1649,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, true, true,
+											  false);
 				}
 
 				break;
@@ -1593,8 +1690,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1612,18 +1709,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, true, true,
+											  false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1733,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,12 +1746,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, true, true, false);
+
+				break;
+			}
+
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				AlterSubscription_refresh(sub, true, NULL, false, true, true);
 
 				break;
 			}
@@ -1931,7 +2043,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2246,16 +2358,16 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * 	  statements with "retain_dead_tuples = true" and "origin = any", and for
+ * 	  ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
  *    when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
@@ -2314,24 +2426,28 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION, subrel_local_oids
+	 * contains the list of relation oids that are already present on the
+	 * subscriber. This check should be skipped for these tables if checking
+	 * for table sync scenario. However, when handling the retain_dead_tuples
+	 * scenario, ensure all tables are checked, as some existing tables may now
+	 * include changes from other origins due to newly created subscriptions on
+	 * the publisher.
 	 */
 	if (check_table_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								schemaname, tablename);
+			}
 		}
 	}
 
@@ -2611,6 +2727,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f262e7a66f7..b58e81424ab 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9888e33d8df..25cf78e0e30 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10958,11 +10958,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 69a91826254..f452ea1542a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2309,7 +2309,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9cc52a7c83f..3fe60ae82cd 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..2a0f49cb742 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllTablesPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..5d58e57585f 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..629e2617f63 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

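As a quick end-to-end usage sketch combining the 0004 patch above with the 0002 patch below (the publication, subscription, and connection names are all hypothetical):

-- Publisher: publish every sequence in the database.
CREATE PUBLICATION regress_pub_allseq FOR ALL SEQUENCES;

-- Subscriber: subscribe as usual.
CREATE SUBSCRIPTION regress_sub_allseq
    CONNECTION 'dbname=postgres host=publisher'
    PUBLICATION regress_pub_allseq;

-- Subscriber: later, re-synchronize all published sequences on demand.
ALTER SUBSCRIPTION regress_sub_allseq REFRESH PUBLICATION SEQUENCES;
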
Attachment: v20250728-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 7c36a53ec2f55c3c5ecb820aa0cf71eca7a6508a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:23:41 +0530
Subject: [PATCH v20250728 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now shows which publications include the
specified sequence, and \dRp indicates whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 102 +++-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 573 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 783 insertions(+), 379 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..d7d33c8b709 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..b4e45fb647f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -881,11 +884,32 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_generated_columns_given,
 							  &publish_generated_columns);
 
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a
+		 * FOR ALL SEQUENCES publication. If the publication includes tables
+		 * as well, issue a warning.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters do not affect sequence synchronization"));
+	}
+
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +942,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1000,6 +1024,25 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication. If the publication includes tables as well, issue a
+		 * warning.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters do not affect sequence synchronization"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1483,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1496,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2019,19 +2070,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 73345bb3c70..9888e33d8df 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -445,7 +450,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10677,7 +10683,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10697,13 +10708,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10815,6 +10827,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19627,6 +19661,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set *all_tables and *all_sequences.
+ * Also check that no object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6298edb26b5..be8097d1137 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4458,6 +4458,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4488,9 +4489,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4506,6 +4512,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4526,6 +4533,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4577,52 +4586,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause does not apply to FOR ALL SEQUENCES-only publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 93a4475d51b..c1003b391a1 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -664,6 +664,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c7ec80e271..797fd1f7839 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3330,6 +3330,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index dbc586c5bc3..69a91826254 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3576,11 +3576,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication object types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3a2eacd793f..a6c6be26f6e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'WARNING';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+WARNING:  WITH clause parameters do not affect sequence synchronization
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +333,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +351,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +383,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +399,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +418,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +429,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +478,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +596,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +891,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1084,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1295,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1338,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1331,7 +1412,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1340,10 +1421,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1434,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1463,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1489,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1560,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1571,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1592,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1604,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1616,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1627,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1638,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1649,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1680,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1692,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1774,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1795,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1930,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1961,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index c9e309190df..0256fa6b100 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'WARNING';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 4353befab99..0701234b313 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2351,6 +2351,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250728-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From f11f674b38fdebe02665d2bd6e867cb66dfd6a9d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250728 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 63c2992d19f..b6ba367b877 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -489,13 +489,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 1fa931a7422..b0eb3967d8f 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3fea0a0206e..1d504f2af28 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -665,37 +613,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1332,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1573,77 +1490,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1729,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1747,7 +1593,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b59221c4d06..cf9a286ee55 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1209,7 +1209,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1331,7 +1331,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1387,7 +1387,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1453,7 +1453,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1588,7 +1588,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2430,7 +2430,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3940,7 +3940,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5411,7 +5411,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index aa1589e3331..d199b740749 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -243,7 +243,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index be8097d1137..edf5063dec0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5228,7 +5228,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5286,7 +5286,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 0c7b8440a61..363c31ff1cf 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -247,6 +247,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -270,9 +272,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0701234b313..4d76bedf9dc 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2915,7 +2915,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250728-0005-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 1a6a4bdb41e00e1eef13f836c484f9994b9a860d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:52:27 +0530
Subject: [PATCH v20250728 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state
       (using pg_get_sequence_data() on the publisher).
    b) Reports an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done; synchronized sequences are committed in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
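
For illustration only (this is not part of the patch, and the subscription,
publication, and connection values below are made up), the commands above
would be used roughly as follows on the subscriber:

    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub1;
    -- published sequences are added in INIT state and synchronized once

    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
    -- newly published sequences are added and only those are synchronized

    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
    -- all sequences are reset to INIT state and re-synchronized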

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 585 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 136 +++-
 src/backend/replication/logical/tablesync.c   |  98 +--
 src/backend/replication/logical/worker.c      |  69 ++-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  28 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1226 insertions(+), 174 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index dabd87a622d..27d93922800 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index a0b1a0ef56f..a3095cb2da0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1405,6 +1405,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index f5fa49517cf..708306b3b1c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * The log_cnt parameter is currently used only by the sequencesync worker to
+ * set a sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1830,6 +1832,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 25ca2898be4..4aa7c58ba83 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1085,7 +1085,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2020,7 +2020,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 116ddf7b835..81e0e369fb0 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 742d9ba68e9..07301f1817b 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -830,6 +839,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_start_time of the sequencesync worker in the subscription's
+ * apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -878,7 +906,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1244,7 +1272,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1545,7 +1573,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1585,6 +1613,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..329ce71d49c
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,585 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	check_and_launch_sync_worker(InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
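+		/*
+		 * The columns in seqRow are, in the order they are read below:
+		 * schname, seqname, last_value, is_called, log_cnt, page_lsn,
+		 * seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle.
+		 */
+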
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by batch_size rather than by the number
+		 * of rows actually fetched, because some sequences may be missing on
+		 * the publisher and the result set may be smaller than the batch.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && (subsequences != NIL))
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
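
[Not part of the patch: for reference, the batched query that copy_sequences()
above builds and sends to the publisher would look roughly like this for a
two-sequence batch; the schema and sequence names are illustrative.]

    SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
           seq.seqstart, seq.seqincrement, seq.seqmin,
           seq.seqmax, seq.seqcycle
    FROM ( VALUES ('public', 's1'), ('public', 's2') ) AS s (schname, seqname)
    JOIN pg_namespace n ON n.nspname = s.schname
    JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
    JOIN pg_sequence seq ON seq.seqrelid = c.oid
    JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true
    ORDER BY s.schname, s.seqname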
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..bc422b32e31 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * Reset the last_seqsync_start_time on a clean exit so that a new
+	 * sequencesync worker can be launched without waiting for the retry
+	 * interval.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,60 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.
+ *
+ * relid: InvalidOid for a sequencesync worker, the table's relid for a
+ * tablesync worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ *
+ * The caller must hold LogicalRepWorkerLock when calling this function; the
+ * lock is released here before attempting to launch a new worker.
+ */
+void
+check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time)
+{
+	int			nsyncworkers;
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+
+	/* Now safe to release the LWLock */
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +163,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +176,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +203,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
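+ *
+ * If has_pending_sequences is not NULL, it is set to true when at least one
+ * sequence still needs to be synchronized, false otherwise.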
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +231,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +240,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +253,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +282,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1d504f2af28..f4696343e35 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -523,50 +523,16 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			}
 			else
 			{
-				/*
-				 * If there is no sync worker for this table yet, count
-				 * running sync workers for this subscription, while we have
-				 * the lock.
-				 */
-				int			nsyncworkers =
-					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool            found;
 
-				/* Now safe to release the LWLock */
-				LWLockRelease(LogicalRepWorkerLock);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				check_and_launch_sync_worker(rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1249,7 +1215,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1522,7 +1488,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1567,7 +1534,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1575,7 +1542,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1589,23 +1556,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index cf9a286ee55..bcf937e0efe 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -668,6 +668,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1208,7 +1213,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1330,7 +1338,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1386,7 +1397,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1452,7 +1466,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1587,7 +1604,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2429,7 +2449,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3939,7 +3962,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5118,7 +5144,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5238,8 +5265,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5342,6 +5369,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5361,14 +5392,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5439,6 +5472,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5451,9 +5488,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3fe60ae82cd..29f3cc6d1fb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 363c31ff1cf..26ed37ffef4 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -102,6 +103,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -251,6 +254,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -259,12 +263,15 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(Oid relid, TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -274,11 +281,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -343,15 +351,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5d58e57585f..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2183,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2193,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' AND sync_error_count > 0 AND sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when the sequence definition does not match between the publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for the mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 4d76bedf9dc..f559b628a5b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1628,6 +1628,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0
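
As a quick way to poke at this locally (not something taken from the patches, just a
minimal sketch assuming the whole series is applied on both nodes; the object names
and the connection string below are made up):

-- On the publisher (names are illustrative only):
CREATE SEQUENCE regress_demo_seq;
SELECT nextval('regress_demo_seq') FROM generate_series(1, 50);
CREATE PUBLICATION regress_demo_pub FOR ALL SEQUENCES;

-- On the subscriber (the connection string is an assumption):
CREATE SEQUENCE regress_demo_seq;
CREATE SUBSCRIPTION regress_demo_sub
    CONNECTION 'host=localhost dbname=postgres'
    PUBLICATION regress_demo_pub;

-- After the sequencesync worker finishes, the sequence should show up in
-- pg_subscription_rel in the ready state with the synced value:
SELECT srrelid::regclass, srsubstate
FROM pg_subscription_rel
WHERE srrelid = 'regress_demo_seq'::regclass;

SELECT last_value, log_cnt, is_called FROM regress_demo_seq;

-- The new error counter sits next to the existing per-subscription ones:
SELECT subname, apply_error_count, sequence_sync_error_count, sync_error_count
FROM pg_stat_subscription_stats
WHERE subname = 'regress_demo_sub';

If the definitions are made to differ (say, a different INCREMENT on the subscriber),
sequence_sync_error_count should start climbing, which is what the 026_stats.pl
changes above rely on.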

Attachment: v20250728-0006-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From 5648adb2e927b4ee0f7aefb21c6b3515cd1a3220 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250728 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  55 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 61 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 97f547b3cc4..d5acc7d9b0a 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8160,16 +8160,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8203,7 +8206,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8212,12 +8215,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..80dc1d785a4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5190,9 +5190,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5333,8 +5333,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5357,10 +5357,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index fcac55aefe6..1de1c55341c 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2293,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, ongoing changes to the sequences themselves are not replicated.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2405,8 +2629,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2419,8 +2643,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 823afe1b30b..a1a2be86d38 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..4922db489d3 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher.  Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication.  Temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables; it is not applicable to sequences.  The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index b8cd15f3280..298480d38eb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#286shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#285)
Re: Logical Replication of sequences

On Mon, Jul 28, 2025 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the attached v20250728 version patch has the
changes for the same.

Thanks for the patches, please find a few comments:

1)
WARNING: WITH clause parameters do not affect sequence synchronization

a)
How about:
WITH clause parameters are not applicable to sequence synchronization
or
WITH clause parameters are not applicable to sequence synchronization
and will be ignored.

b)
Should it be NOTICE or WARNING? I feel NOTICE is more appropriate, as
it is more of an informational message than a warning since it has no
negative consequences.
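
Combining (a) and (b), I am imagining something roughly like the below
(just a sketch; the exact wording and the place in the code are up to
you):

ereport(NOTICE,
        errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));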

2)
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-   List *validate_publications)
+   List *validate_publications, bool refresh_tables,
+   bool refresh_sequences, bool resync_all_sequences)
 {

Do we need 3 new arguments? Note that 'refresh_sequences' is
always true in all cases. I feel the last one alone should suffice.
IIUC, this is the state:

When resync_all_sequences is true:
it indicates it is 'REFRESH PUBLICATION SEQUENCES', that means we have
to refresh new sequences and resync all sequences.

When resync_all_sequences is false:
That means it is 'REFRESH PUBLICATION', we have to refresh new tables
and new sequences alone.

So if the caller passes only 'resync_all_sequences', we should be able
to drive the rest of the values internally.
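
For illustration, a rough sketch of what I mean (the signature and the
derived flags below are only my assumption, not the actual patch):

static void
AlterSubscription_refresh(Subscription *sub, bool copy_data,
						  List *validate_publications,
						  bool resync_all_sequences)
{
	/*
	 * REFRESH PUBLICATION SEQUENCES: re-synchronize every subscribed
	 * sequence and skip the table refresh.  Plain REFRESH PUBLICATION:
	 * refresh tables and pick up newly added sequences only.
	 */
	bool		refresh_tables = !resync_all_sequences;
	bool		refresh_sequences = true;	/* true for all current callers */

	/* ... existing refresh logic, driven by the derived flags ... */
}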

3)
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside
a transaction block

In the same script, we can also test REFRESH PUBLICATION SEQUENCES in a
transaction block.
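
For example, something along these lines (only a sketch; the expected
error text is my assumption based on the existing message):

BEGIN;
ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
-- should fail with something like:
-- ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES cannot run inside a transaction block
ROLLBACK;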

4)
Commit message of patch004 says:

This patch introduce a new command to synchronize the sequences of
a subscription:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

a)
introduce --> introduces

b)
We should also add:

This patch also changes the scope of
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
This command now also considers sequences (newly added or dropped ones).

5)
+ * Reset the last_start_time of the sequencesync worker in the subscription's
+ * apply worker.

last_start_time-->last_seqsync_start_time

6)
alter_subscription.sgml has this:
<term><literal>refresh</literal> (<type>boolean</type>)</term>
<listitem>
<para>
When false, the command will not try to refresh table information.
<literal>REFRESH PUBLICATION</literal> should then be
executed separately.
The default is <literal>true</literal>.
</para>
</listitem>
</varlistentry>

Shouldn't we mention sequence too here:
When false, the command will not try to refresh table and sequence information.

thanks
Shveta

#287shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#286)
Re: Logical Replication of sequences

On Wed, Jul 30, 2025 at 11:16 AM shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 28, 2025 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the attached v20250728 version patch has the
changes for the same.

Thanks for the patches, please find a few comments:

1)
WARNING: WITH clause parameters do not affect sequence synchronization

a)
How about:
WITH clause parameters are not applicable to sequence synchronization
or
WITH clause parameters are not applicable to sequence synchronization
and will be ignored.

b)
Should it be NOTICE or WARNING? I feel NOTICE is more appropriate, as
it is more of an informational message than a warning since it has no
negative consequences.

2)
AlterSubscription_refresh(Subscription *sub, bool copy_data,
-   List *validate_publications)
+   List *validate_publications, bool refresh_tables,
+   bool refresh_sequences, bool resync_all_sequences)
{

Do we need 3 new arguments? Note that 'refresh_sequences' is
always true in all cases. I feel the last one alone should suffice.
IIUC, this is the state:

When resync_all_sequences is true:
it indicates it is 'REFRESH PUBLICATION SEQUENCES', that means we have
to refresh new sequences and resync all sequences.

When resync_all_sequences is false:
That means it is 'REFRESH PUBLICATION', we have to refresh new tables
and new sequences alone.

So if the caller passes only 'resync_all_sequences', we should be able
to drive the rest of the values internally.

3)
ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside
a transaction block

In the same script, we can also test REFRESH PUBLICATION SEQUENCES in a
transaction block.

4)
Commit message of patch004 says:

This patch introduce a new command to synchronize the sequences of
a subscription:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

a)
introduce --> introduces

b)
We should also add:

This patch also changes the scope of
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
This command now also considers sequences (newly added or dropped ones).

5)
+ * Reset the last_start_time of the sequencesync worker in the subscription's
+ * apply worker.

last_start_time-->last_seqsync_start_time

6)
alter_subscription.sgml has this:
<term><literal>refresh</literal> (<type>boolean</type>)</term>
<listitem>
<para>
When false, the command will not try to refresh table information.
<literal>REFRESH PUBLICATION</literal> should then be
executed separately.
The default is <literal>true</literal>.
</para>
</listitem>
</varlistentry>

Shouldn't we mention sequence too here:
When false, the command will not try to refresh table and sequence information.

7)
I am trying to understand the flow of check_and_launch_sync_worker().
We acquire the lock (LogicalRepWorkerLock) in the caller and release it
here. This does not look appropriate. I guess both
logicalrep_worker_find() and logicalrep_sync_worker_count() need the
lock to be held, which is why we have done this. I see that
logicalrep_worker_launch() (invoked by check_and_launch_sync_worker())
also does logicalrep_sync_worker_count() and also tries
garbage-collection once. Shouldn't that suffice? Or is there any
reason to call logicalrep_sync_worker_count() additionally in
check_and_launch_sync_worker()? If logicalrep_sync_worker_count() does
not need to be called from check_and_launch_sync_worker(), the LOCK
problem is sorted.

thanks
Shveta

#288vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#286)
6 attachment(s)
Re: Logical Replication of sequences

On Wed, 30 Jul 2025 at 11:16, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Jul 28, 2025 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the attached v20250728 version patch has the
changes for the same.

Thanks for the patches, please find a few comments:

1)
WARNING: WITH clause parameters do not affect sequence synchronization

a)
How about:
WITH clause parameters are not applicable to sequence synchronization
or
WITH clause parameters are not applicable to sequence synchronization
and will be ignored.

Modified

b)
Should it be NOTICE or WARNING? I feel NOTICE is more appropriate, as
it is more of an informational message than a warning since it has no
negative consequences.

Modified

2)
AlterSubscription_refresh(Subscription *sub, bool copy_data,
-   List *validate_publications)
+   List *validate_publications, bool refresh_tables,
+   bool refresh_sequences, bool resync_all_sequences)
{

Do we need 3 new arguments? Note that 'refresh_sequences' is
always true in all cases. I feel the last one alone should suffice.
IIUC, this is the state:

When resync_all_sequences is true:
it indicates it is 'REFRESH PUBLICATION SEQUENCES', that means we have
to refresh new sequences and resync all sequences.

When resync_all_sequences is false:
That means it is 'REFRESH PUBLICATION', we have to refresh new tables
and new sequences alone.

So if the caller passes only 'resync_all_sequences', we should be able
to drive the rest of the values internally.

Modified

3)
ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside
a transaction block

In the same script, we can also test REFRESH PUBLICATION SEQUENCES in a
transaction block.

Added it

4)
Commit message of patch004 says:

This patch introduce a new command to synchronize the sequences of
a subscription:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

a)
introduce --> introduces

b)
We should also add:

This patch also changes the scope of
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
This command now also considers sequences (newly added or dropped ones).

Modified the commit message now to add this information

5)
+ * Reset the last_start_time of the sequencesync worker in the subscription's
+ * apply worker.

last_start_time-->last_seqsync_start_time

Modified

6)
alter_subscription.sgml has this:
<term><literal>refresh</literal> (<type>boolean</type>)</term>
<listitem>
<para>
When false, the command will not try to refresh table information.
<literal>REFRESH PUBLICATION</literal> should then be
executed separately.
The default is <literal>true</literal>.
</para>
</listitem>
</varlistentry>

Shouldn't we mention sequence too here:
When false, the command will not try to refresh table and sequence information.

Modified

Also, the comment from [1]/messages/by-id/CAJpy0uBCOmoyc44J46PpHbip0Sovqm99cL=AJoAErXG0EN2Duw@mail.gmail.com has been addressed: the lock is now
released in the caller function itself. Previously, the lock had to be
held across check_and_launch_sync_worker() because it fetched the number
of running sync workers; this has been refactored so that the caller
retrieves the count before releasing the lock.
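
For reference, the reworked flow now looks roughly like this (condensed
from ProcessSyncingSequencesForApply() in the attached 0005 patch):

void
ProcessSyncingSequencesForApply(void)
{
	LogicalRepWorker *sequencesync_worker;
	int			nsyncworkers;

	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);

	/* Bail out if a sequencesync worker is already running. */
	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
												 InvalidOid,
												 WORKERTYPE_SEQUENCESYNC,
												 true);
	if (sequencesync_worker)
	{
		LWLockRelease(LogicalRepWorkerLock);
		return;
	}

	/* Count sync workers for this subscription while the lock is held. */
	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
	LWLockRelease(LogicalRepWorkerLock);

	check_and_launch_sync_worker(nsyncworkers, InvalidOid,
								 &MyLogicalRepWorker->last_seqsync_start_time);
}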

Thanks for the comments, the attached patch has the changes for the same.
[1]: /messages/by-id/CAJpy0uBCOmoyc44J46PpHbip0Sovqm99cL=AJoAErXG0EN2Duw@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250801-0001-Enhance-pg_get_sequence_data-function.patch
From b78678d5171746211f53d1608f566885cb50aac7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:19:47 +0530
Subject: [PATCH v20250801 1/6] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func.sgml                 | 26 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 +++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 74a16af04ad..af63e1b6116 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19948,6 +19948,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>
+        indicates whether the sequence has been used, <literal>log_cnt</literal>
+        shows how many fetches remain before a new WAL record must be written,
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..f5fa49517cf 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1778,15 +1779,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1801,6 +1803,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1816,11 +1822,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3ee8fed7e53..9cc52a7c83f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250801-0005-New-worker-for-sequence-synchronization-du.patch
From 1b7e41c9f3899991fa462e9b9cd6bb88109c3521 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:52:27 +0530
Subject: [PATCH v20250801 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences in INIT state with pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 593 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 125 +++-
 src/backend/replication/logical/tablesync.c   |  88 +--
 src/backend/replication/logical/worker.c      |  69 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  25 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   8 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1224 insertions(+), 164 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index dabd87a622d..27d93922800 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -337,7 +337,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index a0b1a0ef56f..a3095cb2da0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1405,6 +1405,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index f5fa49517cf..708306b3b1c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1830,6 +1832,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1aa93f05aea..98998c08387 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1079,7 +1079,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2012,7 +2012,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 742d9ba68e9..dae190f7c59 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -830,6 +839,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -878,7 +906,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1244,7 +1272,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1545,7 +1573,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1585,6 +1613,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..d1a6db91bbe
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,593 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	check_and_launch_sync_worker(nsyncworkers, InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that are missing on the publisher, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the current state of each sequence from the publisher and set the
+ * corresponding subscriber sequence to the same value. The caller is
+ * responsible for locking the local relations.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
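+
+	/*
+	 * The remote query below also orders its results by schema and sequence
+	 * name, so the forward scan driven by search_pos can pair each fetched
+	 * row with its local entry in a single pass.
+	 */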
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
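+		/*
+		 * Result column types of the query below: schname and seqname, the
+		 * columns returned by pg_get_sequence_data() (last_value, is_called,
+		 * log_cnt, page_lsn), and the pg_sequence parameters (seqtypid,
+		 * seqstart, seqincrement, seqmin, seqmax, seqcycle).
+		 */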
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
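+			/*
+			 * Candidates that search_pos advances past without a match were
+			 * not returned by the publisher; they keep remote_seq_fetched =
+			 * false and are reported as missing after the batch completes.
+			 */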
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+				elog(ERROR, "cache lookup failed for sequence \"%s.%s\"",
+					 seqinfo->nspname, seqinfo->seqname);
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn);
+				if (message_level_is_interesting(DEBUG1))
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If fewer rows were fetched than requested, some sequences in this
+		 * batch are missing on the publisher. Identify which ones.
+		 */
+		if ((batch_succeeded_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the batch size rather than by the number
+		 * of fetched rows, because sequences missing on the publisher can
+		 * make the fetched row count smaller than the batch size.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	List	   *subsequences;
+	AclResult	aclresult;
+	UserContext ucxt;
+	bool		run_as_owner = false;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	List	   *sequences_to_copy = NIL;
+
+	StartTransactionCommand();
+
+	/* Get the sequences that should be synchronized. */
+	subsequences = GetSubscriptionRelations(subid, false, true, true);
+
+	foreach_ptr(SubscriptionRelState, subseq, subsequences)
+	{
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		sequence_rel = table_open(subseq->relid, RowExclusiveLock);
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Make sure that the copy command runs as the sequence owner, unless
+		 * the user has opted out of that behaviour.
+		 */
+		run_as_owner = MySubscription->runasowner;
+		if (!run_as_owner)
+			SwitchToUntrustedUser(sequence_rel->rd_rel->relowner, &ucxt);
+
+		/*
+		 * Check that our sequencesync worker has permission to insert into
+		 * the target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_INSERT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/*
+		 * Allocate the tracking info in a permanent memory context so that it
+		 * remains valid after the transaction is committed below.
+		 */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subseq->relid;
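+		/* Set by copy_sequences() when the publisher returns this sequence */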
+		seq_info->remote_seq_fetched = false;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	list_free_deep(sequences_to_copy);
+
+	if (!run_as_owner && (subsequences != NIL))
+		RestoreUserContext(&ucxt);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..ca5c278cd27 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * This is a clean exit, so reset last_seqsync_start_time to allow a new
+	 * sequencesync worker to be launched immediately if one is needed.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,49 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if a worker slot is
+ * available and the retry interval has elapsed.
+ *
+ * nsyncworkers: number of currently running sync workers for the subscription.
+ * relid: InvalidOid to launch a sequencesync worker, or the OID of the table
+ * to launch a tablesync worker for.
+ * last_start_time: pointer to the last start time of the worker.
+ */
+void
+check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+							 TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are static so that their
+	 * values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1d504f2af28..cba858a5522 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -334,7 +334,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -376,9 +376,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -411,6 +408,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -424,11 +429,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -465,8 +465,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -530,43 +530,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				check_and_launch_sync_worker(nsyncworkers, rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1249,7 +1225,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1522,7 +1498,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1567,7 +1544,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1575,7 +1552,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1589,23 +1566,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index cf9a286ee55..bcf937e0efe 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -668,6 +668,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1208,7 +1213,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1330,7 +1338,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1386,7 +1397,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1452,7 +1466,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1587,7 +1604,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2429,7 +2449,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3939,7 +3962,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5118,7 +5144,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5238,8 +5265,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5342,6 +5369,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5361,14 +5392,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5439,6 +5472,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5451,9 +5488,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1c12ddbae49..ab061d0ba9b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	11
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,23 +2189,25 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2222,6 +2224,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3fe60ae82cd..29f3cc6d1fb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index a541f4843bd..49af743b20d 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,14 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 363c31ff1cf..cfb3abf8d27 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -102,6 +103,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -251,6 +254,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -259,12 +263,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+										 TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -274,11 +282,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -343,15 +352,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 5d58e57585f..8b2c407ccdb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2183,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2193,7 +2194,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber to match the publisher so that sequence sync can succeed.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise extra values would be fetched
+# for the sequences, causing the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for the mismatched sequence definition is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 93ad46f33c0..f8777a7009f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0
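
For readers skimming the patches, here is a minimal psql-level sketch of the
scenario the new 036_sequences.pl test above automates. It assumes the FOR ALL
SEQUENCES publication syntax and the REFRESH PUBLICATION [SEQUENCES] commands
introduced by this series; the connection string is only illustrative, and the
object names are the ones used in the test.

-- On the publisher
CREATE TABLE regress_seq_test (v bigint);
CREATE SEQUENCE regress_s1;
INSERT INTO regress_seq_test
    SELECT nextval('regress_s1') FROM generate_series(1, 100);
CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

-- On the subscriber (the same objects must already exist there)
CREATE TABLE regress_seq_test (v bigint);
CREATE SEQUENCE regress_s1;
CREATE SUBSCRIPTION regress_seq_sub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION regress_seq_pub;

-- After the initial sync this should match the publisher (100|32|t)
SELECT last_value, log_cnt, is_called FROM regress_s1;

-- Pick up sequences newly added to the publication:
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
-- Re-synchronize all published sequences, including existing ones:
ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

As in the test, plain REFRESH PUBLICATION only initializes sequences that are
new to the subscription, while REFRESH PUBLICATION SEQUENCES also resets the
already-synchronized ones to the INIT state so their values are copied again.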

Attachment: v20250801-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From 14822086201852dc532f698901858c017e5378b7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:34:50 +0530
Subject: [PATCH v20250801 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command synchronizes the set of sequences associated with a
subscription based on the sequences currently present in the publication
on the publisher. It also marks the corresponding entries in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  64 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 374 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |   8 +
 src/test/regress/expected/subscription.out  |   8 +-
 src/test/regress/sql/subscription.sql       |   4 +
 15 files changed, 438 insertions(+), 126 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d7d33c8b709..3f15ccaac04 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,7 +777,7 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
  * should use GetAllTablesPublicationRelations().
  */
 List *
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable for FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllTablesPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllTablesPublicationRelations(RELKIND_RELATION,
+																   pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,50 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllTablesPublicationRelations(RELKIND_SEQUENCE,
+														 false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b6ba367b877..dabd87a622d 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -463,7 +463,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -500,7 +502,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -512,8 +515,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -525,12 +542,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * get only the tables that have not yet reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, get only the sequences that have not yet reached READY state
+ * (i.e. are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -539,6 +566,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -561,9 +591,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index f6eca09ee15..a0b1a0ef56f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index cd6c3684482..1aa93f05aea 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,6 +107,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +717,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +734,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +748,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -755,16 +764,19 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -773,7 +785,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				CheckSubscriptionRelkind(get_rel_relkind(relid),
 										 rv->schemaname, rv->relname);
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -781,6 +793,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +821,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -843,18 +860,49 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'resync_all_sequences' is false:
+ *     Add or remove tables and sequences that have been added to or removed
+ * 	   from the publication since the last subscription creation or refresh.
+ * If 'resync_all_sequences' is true:
+ *     Perform the above operation only for sequences.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,9 +910,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
+	bool		refresh_tables = !resync_all_sequences;
 
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
@@ -885,16 +935,22 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names,
+								   fetch_sequence_list(wrconn,
+													   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -907,22 +963,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->retaindeadtuples, sub->origin,
+									  subrel_local_oids, subrel_count,
+									  sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a
+		 * new state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +984,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,28 +1001,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1064,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1121,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1138,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1538,8 +1628,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1553,7 +1643,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, false);
 				}
 
 				break;
@@ -1593,8 +1683,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1612,18 +1702,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1725,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,12 +1738,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, false);
+
+				break;
+			}
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, true);
 
 				break;
 			}
@@ -1931,7 +2035,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2246,16 +2350,16 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * 	  statements with "retain_dead_tuples = true" and "origin = any", and for
+ * 	  ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
  *    when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
@@ -2314,24 +2418,28 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2611,6 +2719,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f262e7a66f7..b58e81424ab 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -877,7 +877,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..9cefecf1da1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,11 +10983,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 057f7b4879c..9b5eefe7cbe 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9cc52a7c83f..3fe60ae82cd 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..2a0f49cb742 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllTablesPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index ea869588d84..a541f4843bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dce8c672b40..5d58e57585f 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..0042d0b0f07 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,11 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
+END;
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/regress/sql/subscription.sql b/src/test/regress/sql/subscription.sql
index f0f714fe747..4ace5f4fa95 100644
--- a/src/test/regress/sql/subscription.sql
+++ b/src/test/regress/sql/subscription.sql
@@ -240,6 +240,10 @@ BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
 END;
 
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+END;
+
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
 SELECT func();
-- 
2.43.0
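
To see what the 0004 patch exposes at the SQL level, a couple of quick checks
may help while reviewing. These are only a sketch: they assume the
pg_publication_sequences view and pg_get_publication_sequences() function added
above, plus the existing pg_subscription_rel state codes ('i' for INIT, 'r' for
READY); the publication name is illustrative.

-- Publisher: which sequences does a publication currently include?
SELECT * FROM pg_publication_sequences
WHERE pubname = 'regress_seq_pub';

-- The underlying set-returning function the view is built on:
SELECT relid::regclass
FROM pg_get_publication_sequences('regress_seq_pub');

-- Subscriber: sequences are tracked in pg_subscription_rel alongside tables;
-- REFRESH PUBLICATION SEQUENCES puts them back into the 'i' (INIT) state so
-- the sequencesync worker copies their values again.
SELECT srrelid::regclass AS seqname, srsubstate
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relkind = 'S';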

Attachment: v20250801-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 8e26ee9da1969b013afc6ffea5353291a222b826 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250801 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 63c2992d19f..b6ba367b877 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -489,13 +489,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 1fa931a7422..b0eb3967d8f 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3fea0a0206e..1d504f2af28 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -348,9 +296,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -386,7 +334,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -413,8 +361,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -429,7 +377,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -567,8 +515,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -665,37 +613,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1332,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1573,77 +1490,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1729,7 +1575,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1747,7 +1593,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b59221c4d06..cf9a286ee55 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1209,7 +1209,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1331,7 +1331,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1387,7 +1387,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1453,7 +1453,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1588,7 +1588,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2430,7 +2430,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -3940,7 +3940,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5411,7 +5411,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index d1894420fcf..1479c4a955d 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5235,7 +5235,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5293,7 +5293,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index c91797c869c..ea869588d84 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 0c7b8440a61..363c31ff1cf 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -247,6 +247,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -270,9 +272,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index f9bdc1615e6..93ad46f33c0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2915,7 +2915,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250801-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchapplication/octet-stream; name=v20250801-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchDownload
From a0da6e51aad643d9ae564201b07b9ce9d9fe6aed Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sun, 20 Jul 2025 18:23:41 +0530
Subject: [PATCH v20250801 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now displays which publications include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
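
For example (a minimal sketch; the publication names are made up):

    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_all FOR ALL SEQUENCES, ALL TABLES;
    -- rejected: ALL SEQUENCES cannot be combined with TABLE or TABLES IN SCHEMA
    CREATE PUBLICATION pub_bad FOR ALL SEQUENCES, TABLE t1;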

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 102 +++-
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 573 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 783 insertions(+), 379 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..d7d33c8b709 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1bf7eaae5b3..681f4369411 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -881,11 +884,32 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_generated_columns_given,
 							  &publish_generated_columns);
 
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication. If the publication also includes tables, only issue
+		 * a notice that the parameters will be ignored.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +942,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1000,6 +1024,25 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication. If the publication also includes tables, only issue
+		 * a notice that the parameters will be ignored.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1483,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1496,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2019,19 +2070,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 273117c977c..d1894420fcf 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4465,6 +4465,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4495,9 +4496,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4513,6 +4519,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4533,6 +4540,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4584,52 +4593,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* Omit the WITH clause for publications that are FOR ALL SEQUENCES only */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index dde85ed156c..75e52e2a1ac 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 6c7ec80e271..797fd1f7839 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3330,6 +3330,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 1f2ca946fc5..057f7b4879c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3582,11 +3582,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 1ec3fa34a2d..3ffcf7e3d60 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -37,20 +37,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns = foo);
 ERROR:  invalid value for publication parameter "publish_generated_columns": "foo"
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -67,15 +67,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -94,10 +94,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -106,20 +106,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -130,10 +130,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -151,10 +151,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -163,10 +163,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -177,10 +177,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -204,10 +204,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -219,24 +219,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should give a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -252,10 +333,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -270,10 +351,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -302,10 +383,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -318,10 +399,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -337,10 +418,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -348,10 +429,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -384,10 +465,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -397,10 +478,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -515,10 +596,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -810,10 +891,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1003,10 +1084,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1214,10 +1295,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1257,10 +1338,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1331,7 +1412,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1340,10 +1421,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1353,20 +1434,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1382,19 +1463,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1408,44 +1489,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1479,10 +1560,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1490,20 +1571,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1511,10 +1592,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1523,10 +1604,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1535,10 +1616,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1546,10 +1627,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1557,10 +1638,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1568,29 +1649,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1599,10 +1680,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1611,10 +1692,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1693,18 +1774,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1714,20 +1795,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1849,26 +1930,26 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 CREATE PUBLICATION pub3 FOR ALL TABLES WITH (publish_generated_columns);
 \dRp+ pub3
-                                                Publication pub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1880,50 +1961,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 2585f083181..f6dd538cb5c 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -119,6 +119,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e6f2e93b2d6..f9bdc1615e6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2351,6 +2351,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

v20250801-0006-Documentation-for-sequence-synchronization.patch
From 3b56401f58cca6f045d00ab44e5188e50ca91657 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250801 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  61 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 465 insertions(+), 64 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 97f547b3cc4..d5acc7d9b0a 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8160,16 +8160,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8203,7 +8206,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8212,12 +8215,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..80dc1d785a4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5190,9 +5190,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5333,8 +5333,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5357,10 +5357,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index fcac55aefe6..1de1c55341c 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequences on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2072,16 +2293,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2405,8 +2629,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2419,8 +2643,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 823afe1b30b..a1a2be86d38 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..cdfe1373cd8 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -139,9 +141,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any errors due to sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle errors due to sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. It is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal>, or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal>, and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index b8cd15f3280..298480d38eb 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle errors due to sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#289Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#288)
RE: Logical Replication of sequences

Dear Vignesh,

I played with your patch and found something.

01.
In LogicalRepSyncSequences() and GetSubscriptionRelations(), there is a possibility
that the sequence on the subscriber could be dropped before it is opened.
This can cause a `could not open relation with OID %u` error, which is not user-friendly.
Can we avoid that? Even if it is difficult, we should at least add an ereport().

02.
```
/*
 * Check that our sequencesync worker has permission to insert into
 * the target sequence.
 */
aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
                              ACL_INSERT);
if (aclresult != ACLCHECK_OK)
    aclcheck_error(aclresult,
                   get_relkind_objtype(sequence_rel->rd_rel->relkind),
                   seqname);
```

Hmm, but the upcoming SetSequence() call needs UPDATE privilege. I feel that should be checked as well.

03.
Similar to 01, sequences can be dropped just before entering copy_sequences().
This can cause a `cache lookup failed for sequence` error, which cannot be translated.
Can we avoid that, or change the error function to ereport()?

04.
```
if (message_level_is_interesting(DEBUG1))
    ereport(DEBUG1,
            errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
                            MySubscription->name,
                            seqinfo->nspname,
                            seqinfo->seqname));
```

I feel there is no need for the if-statement here, because no additional data has to be computed for the message.

05.
```
list_free_deep(sequences_to_copy);
```

IIUC, this function frees each element and the list itself, but it does nothing for the
attributes of the elements. Can we also pfree() seqname and nspname?

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#290vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#289)
6 attachment(s)
Re: Logical Replication of sequences

On Fri, 1 Aug 2025 at 16:18, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

I played with your patch and found something.

01.
In LogicalRepSyncSequences() and GetSubscriptionRelations(), there is a possibility
that the sequence on the subscriber could be dropped before it is opened.
This can cause a `could not open relation with OID %u` error, which is not user-friendly.
Can we avoid that? Even if it is difficult, we should at least add an ereport().

Addressed this by taking a RowExclusiveLock on the sequence while
preparing the list.
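
A minimal sketch of that approach (the helper name and the way the list is
built are assumptions here, not the actual patch code): take the lock while the
sequence is added to the synchronization list, and skip it if it has already
disappeared.

```
#include "postgres.h"

#include "storage/lmgr.h"
#include "utils/syscache.h"

/*
 * Hypothetical helper: lock a published sequence while adding it to the list
 * of sequences to synchronize, so that a concurrent DROP SEQUENCE cannot make
 * a later relation_open() fail with "could not open relation with OID %u".
 */
static bool
lock_sequence_for_sync(Oid seqoid)
{
	/* Block a concurrent DROP until the synchronization is done. */
	LockRelationOid(seqoid, RowExclusiveLock);

	/* The sequence may already be gone; skip it instead of erroring later. */
	if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(seqoid)))
	{
		UnlockRelationOid(seqoid, RowExclusiveLock);
		return false;
	}

	return true;
}
```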

02.
```
/*
 * Check that our sequencesync worker has permission to insert into
 * the target sequence.
 */
aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
                              ACL_INSERT);
if (aclresult != ACLCHECK_OK)
    aclcheck_error(aclresult,
                   get_relkind_objtype(sequence_rel->rd_rel->relkind),
                   seqname);
```

Hmm, but the upcoming SetSequence() call needs UPDATE privilege. I feel that should be checked as well.

Modified
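
For reference, one way the quoted check could be extended so that both
privileges are verified (a sketch only, reusing the variable names from the
quoted snippet; not necessarily how the patch does it):

```
/* Require INSERT privilege, as in the quoted snippet ... */
aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
                              ACL_INSERT);
if (aclresult != ACLCHECK_OK)
    aclcheck_error(aclresult,
                   get_relkind_objtype(sequence_rel->rd_rel->relkind),
                   seqname);

/* ... and also require UPDATE privilege for the upcoming SetSequence() call. */
aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
                              ACL_UPDATE);
if (aclresult != ACLCHECK_OK)
    aclcheck_error(aclresult,
                   get_relkind_objtype(sequence_rel->rd_rel->relkind),
                   seqname);
```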

03.
Similar to 01, sequences can be dropped just before entering copy_sequences().
This can cause a `cache lookup failed for sequence` error, which cannot be translated.
Can we avoid that, or change the error function to ereport()?

Changed it to log that the sequence was concurrently dropped.
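
Roughly like the following fragment inside the per-sequence loop (a sketch
with an assumed variable name seqoid; not the actual patch code): look the
sequence up first and skip it with a LOG message if it has vanished, instead
of letting the syscache lookup fail.

```
HeapTuple	tup;

/* The sequence may have been dropped concurrently; skip it if so. */
tup = SearchSysCache1(RELOID, ObjectIdGetDatum(seqoid));
if (!HeapTupleIsValid(tup))
{
	ereport(LOG,
			errmsg("skipping synchronization of sequence %u because it was concurrently dropped",
				   seqoid));
	continue;			/* move on to the next sequence in the list */
}

/* ... copy the sequence data ... */

ReleaseSysCache(tup);
```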

04.
```
if (message_level_is_interesting(DEBUG1))
    ereport(DEBUG1,
            errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
                            MySubscription->name,
                            seqinfo->nspname,
                            seqinfo->seqname));
```

I feel there is no need for the if-statement here, because no additional data has to be computed for the message.

Modified
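
That is, the call can simply be the quoted ereport() without the guard;
ereport() itself skips evaluating the message arguments when DEBUG1 is not
being logged, so the guard only pays off when extra work is needed to build
the arguments.

```
ereport(DEBUG1,
        errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
                        MySubscription->name,
                        seqinfo->nspname,
                        seqinfo->seqname));
```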

05.
```
list_free_deep(sequences_to_copy);
```

IIUC, this function frees each element and the list itself, but it does nothing for the
attributes of the elements. Can we also pfree() seqname and nspname?

Modified

The attached v20250806 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20250806-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 6f1da678684d93978a3760a819129c04f7e76812 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250806 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 244acf52f36..60ce2016c7e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -504,13 +504,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 1fa931a7422..b0eb3967d8f 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 89e241c8392..4e751920046 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1215,7 +1215,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1337,7 +1337,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1393,7 +1393,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1459,7 +1459,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1594,7 +1594,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2436,7 +2436,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4083,7 +4083,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5555,7 +5555,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 40c3add0b21..458990f8b34 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5223,7 +5223,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5281,7 +5281,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f458447a0e5..9a223b8076a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7c0204dd6f4..7920908395d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -248,6 +248,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -271,9 +273,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index f9bdc1615e6..93ad46f33c0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2915,7 +2915,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250806-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From b8b73b68ef10a4a77bc70fd27faa30dd31b5f8cd Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:34:50 +0530
Subject: [PATCH v20250806 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command synchronizes the set of sequences associated with a
subscription based on the sequences currently present in the publication
on the publisher. It also marks the corresponding entries in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  64 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 374 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |   8 +
 src/test/regress/expected/subscription.out  |   8 +-
 src/test/regress/sql/subscription.sql       |   4 +
 15 files changed, 438 insertions(+), 126 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d7d33c8b709..3f15ccaac04 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,7 +777,7 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
  * should use GetAllTablesPublicationRelations().
  */
 List *
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable for FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllTablesPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllTablesPublicationRelations(RELKIND_RELATION,
+																   pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,50 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllTablesPublicationRelations(RELKIND_SEQUENCE,
+														 false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 60ce2016c7e..28622e54aaa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -478,7 +478,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -515,7 +517,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -527,8 +530,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -540,12 +557,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -554,6 +581,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -576,9 +606,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 77c693f630e..d5250c40a3b 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index cd6c3684482..1aa93f05aea 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,6 +107,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +717,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +734,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +748,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -755,16 +764,19 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -773,7 +785,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				CheckSubscriptionRelkind(get_rel_relkind(relid),
 										 rv->schemaname, rv->relname);
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -781,6 +793,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +821,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -843,18 +860,49 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'resync_all_sequences' is false:
+ *     Add or remove tables and sequences that have been added to or removed
+ * 	   from the publication since the last subscription creation or refresh.
+ * If 'resync_all_sequences' is true:
+ *     Perform the above operation only for sequences.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,9 +910,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
+	bool		refresh_tables = !resync_all_sequences;
 
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
@@ -885,16 +935,22 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names,
+								   fetch_sequence_list(wrconn,
+													   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -907,22 +963,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->retaindeadtuples, sub->origin,
+									  subrel_local_oids, subrel_count,
+									  sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +984,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,28 +1001,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1064,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1121,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1138,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1538,8 +1628,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1553,7 +1643,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, false);
 				}
 
 				break;
@@ -1593,8 +1683,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1612,18 +1702,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1725,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,12 +1738,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, false);
+
+				break;
+			}
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, true);
 
 				break;
 			}
@@ -1931,7 +2035,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2246,16 +2350,16 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
  *    when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
@@ -2314,24 +2418,28 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2611,6 +2719,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 68184f5d671..2005884e03c 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..9cefecf1da1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,11 +10983,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 057f7b4879c..9b5eefe7cbe 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..2a0f49cb742 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllTablesPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9a223b8076a..87fd96e0ff5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6509fda77a9..858c58cf464 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..0042d0b0f07 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,11 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
+END;
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/regress/sql/subscription.sql b/src/test/regress/sql/subscription.sql
index f0f714fe747..4ace5f4fa95 100644
--- a/src/test/regress/sql/subscription.sql
+++ b/src/test/regress/sql/subscription.sql
@@ -240,6 +240,10 @@ BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
 END;
 
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+END;
+
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
 SELECT func();
-- 
2.43.0

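For readers skimming the patch, here is a minimal subscriber-side usage sketch of the pieces added above. The object names (mysub, mypub) are hypothetical, and this assumes the publication on the publisher already includes sequences (the publication-side support comes from other patches in this series, not shown here):

-- Re-synchronize all sequences published by the subscribed publications.
-- As the regression test above shows, this cannot run inside a transaction
-- block, and the subscription must be enabled.
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

-- On the publisher, the new pg_publication_sequences view (built on
-- pg_get_publication_sequences) shows which sequences a publication carries.
SELECT pubname, schemaname, sequencename
  FROM pg_publication_sequences
 WHERE pubname = 'mypub';

Note that fetch_sequence_list() skips the sequence fetch entirely when the publisher reports a server version below 19, so mixed-version setups simply get no sequence synchronization rather than an error.
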
Attachment: v20250806-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch)
From 1f53e2a3ae5db0a8d2d208dec4ec9512341c98d4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250806 1/6] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 26 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 +++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..f21935a7e31 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,32 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>
+        indicates whether the sequence has been used, <literal>log_cnt</literal>
+        shows how many fetches remain before a new WAL record must be written,
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..f5fa49517cf 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1778,15 +1779,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1801,6 +1803,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1816,11 +1822,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

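A quick illustration of the enhanced function (the sequence name below is hypothetical; the values returned depend on how far the sequence has advanced):

-- Fetch the full sequence state, including the two new output columns.
SELECT last_value, is_called, log_cnt, page_lsn
  FROM pg_get_sequence_data('myseq');

The page_lsn returned here is what the later patches record in pg_subscription_rel.srsublsn at sync time, so it can be compared against the sequence's current page LSN on the publisher to decide whether a re-sync is needed.
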
Attachment: v20250806-0005-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 592257be445e1349262afe06d8f0f903c562b8c7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 4 Aug 2025 18:51:55 +0530
Subject: [PATCH v20250806 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values (via pg_get_sequence_data()) of sequences that are in INIT state.
    b) Reports an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   6 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 641 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 125 +++-
 src/backend/replication/logical/tablesync.c   |  88 +--
 src/backend/replication/logical/worker.c      |  69 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   9 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1275 insertions(+), 166 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 28622e54aaa..383f01f83e2 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -352,7 +352,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index d5250c40a3b..3cc67b42387 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1405,6 +1405,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index f5fa49517cf..708306b3b1c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1830,6 +1832,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1aa93f05aea..f3bedce9ddd 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1030,7 +1030,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				if (resync_all_sequences)
 				{
 					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
-											   InvalidXLogRecPtr);
+											   InvalidXLogRecPtr, false);
 					ereport(DEBUG1,
 							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 											get_namespace_name(get_rel_namespace(relid)),
@@ -1079,7 +1079,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2012,7 +2012,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 37377f7eb63..60fb14861ab 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -832,6 +841,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -880,7 +908,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1246,7 +1274,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1547,7 +1575,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1587,6 +1615,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..cb1a21f76a4
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,641 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	check_and_launch_sync_worker(nsyncworkers, InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It reports sequences that are missing on the publisher, as well as sequences
+ * that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	bool		run_as_owner = MySubscription->runasowner;
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Relation	sequence_rel;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!sequence_rel || !HeapTupleIsValid(tup))
+			{
+				elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+					 seqinfo->nspname, seqinfo->seqname);
+
+				batch_skipped_count++;
+				continue;
+			}
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				UserContext ucxt;
+
+				/*
+				 * Make sure that the copy command runs as the sequence owner,
+				 * unless the user has opted out of that behaviour.
+				 */
+				if (!run_as_owner)
+					SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				if (!run_as_owner)
+					RestoreUserContext(&ucxt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn, false);
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+										MySubscription->name,
+										seqinfo->nspname,
+										seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, NoLock);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_skipped_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by the batch size rather than by the number
+		 * of fetched rows, because some sequences may be missing on the
+		 * publisher.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	AclResult	aclresult;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	List	   *sequences_to_copy = NIL;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/*
+		 * Check that our sequencesync worker has permission to update the
+		 * target sequence.
+		 */
+		aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+									  ACL_UPDATE);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult,
+						   get_relkind_objtype(sequence_rel->rd_rel->relkind),
+						   seqname);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subrel->srrelid;
+		seq_info->remote_seq_fetched = false;
+		seq_info->seqowner = sequence_rel->rd_rel->relowner;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+	{
+		pfree(seq_info->seqname);
+		pfree(seq_info->nspname);
+		pfree(seq_info);
+	}
+
+	list_free(sequences_to_copy);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. On
+ * error, disable the subscription if disable_on_error is set; otherwise
+ * report the failure and re-throw the error.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..ca5c278cd27 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,49 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid: InvalidOid for a sequencesync worker, or the OID of the table for a
+ * tablesync worker.
+ * last_start_time: Pointer to the worker's last start time.
+ */
+void
+check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+							 TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are static, since the same
+	 * values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..b87413ff267 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				check_and_launch_sync_worker(nsyncworkers, rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4e751920046..44061625b1d 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -674,6 +674,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1214,7 +1219,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1336,7 +1344,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1392,7 +1403,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1458,7 +1472,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1593,7 +1610,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2435,7 +2455,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4082,7 +4105,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5262,7 +5288,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5382,8 +5409,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5486,6 +5513,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5505,14 +5536,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5583,6 +5616,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5595,9 +5632,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 87fd96e0ff5..5e0a5a989c2 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,15 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+	Oid			seqowner;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7920908395d..85e31e76cbe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -103,6 +104,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -252,6 +255,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -260,12 +264,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+										 TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -275,11 +283,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -344,15 +353,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 858c58cf464..eddfa5784c9 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2183,6 +2183,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2194,7 +2195,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# worker will also fail due to the different sequence increment values on the
+	# publisher and subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that sequencesync doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence definitions is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 93ad46f33c0..f8777a7009f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250806-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch, charset US-ASCII)
From e563eaa981340cdb6e3f977aaae46921d624368a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250806 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands have been enhanced: \d now shows which
publications contain the specified sequence, and \dRp indicates whether a
publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
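
For illustration, a minimal usage sketch of the feature (object names and the
connection string below are placeholders; the statements follow the syntax
exercised by the TAP tests in this series):

    -- On the publisher: sequences only, or combined with all tables.
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_all FOR ALL SEQUENCES, ALL TABLES;

    -- On the subscriber: subscribe, then re-synchronize sequences on demand.
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=publisher dbname=postgres' PUBLICATION pub_seq;
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;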

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 102 ++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 779 insertions(+), 375 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d6f94db5d99..d7d33c8b709 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 803c26ab216..6001f5428e8 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -848,11 +848,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -881,11 +884,32 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_generated_columns_given,
 							  &publish_generated_columns);
 
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters do not apply to sequence synchronization,
+		 * so creating a sequences-only publication with them is an error.
+		 * If the publication includes tables as well, just issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +942,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -1000,6 +1024,25 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication.  If the publication also includes tables, only issue
+		 * a notice and ignore the parameters.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1483,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1496,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2019,19 +2070,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+	;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that each object type is specified at most once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index f3a353a61a5..40c3add0b21 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4453,6 +4453,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4483,9 +4484,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4501,6 +4507,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4521,6 +4528,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4572,52 +4581,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not dumped for sequence-only publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index dde85ed156c..75e52e2a1ac 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index a86b38466de..d8ffe78635f 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3323,6 +3323,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 1f2ca946fc5..057f7b4879c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3582,11 +3582,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 236eba2540e..a06d4918789 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6443,9 +6443,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 53268059142..b689a6bd7c8 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index deddf0da844..d77bbc973f1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e6f2e93b2d6..f9bdc1615e6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2351,6 +2351,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

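As a quick orientation before the documentation patch below, here is a minimal end-to-end sketch of how the syntax exercised in the regression tests above is meant to be used. The publication, subscription, and sequence names and the connection string are purely illustrative, and the commands assume the whole patch set is applied on both nodes:

-- publisher: publish the current state of all sequences
CREATE SEQUENCE app_id_seq;
CREATE PUBLICATION pub_all_seqs FOR ALL SEQUENCES;
SELECT pubname, puballtables, puballsequences
  FROM pg_publication WHERE pubname = 'pub_all_seqs';

-- subscriber: the sequence is assumed to already exist locally, as for tables
CREATE SEQUENCE app_id_seq;
CREATE SUBSCRIPTION sub_all_seqs
  CONNECTION 'host=publisher dbname=postgres'  -- illustrative connection string
  PUBLICATION pub_all_seqs;

-- later, e.g. just before an upgrade cutover, re-synchronize all sequences
ALTER SUBSCRIPTION sub_all_seqs REFRESH PUBLICATION SEQUENCES;

-- per-sequence sync state ('i' = initialize, 'r' = ready, per the catalogs change below)
SELECT srrelid::regclass AS seq_name, srsubstate
  FROM pg_subscription_rel
 WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');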
Attachment: v20250806-0006-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 3cf9579f163c039cf8016ee23916bf50bf964a50 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250806 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  61 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 465 insertions(+), 64 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index da8a7882580..2e0bedf9c6f 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8161,16 +8161,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8204,7 +8207,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8213,12 +8216,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..80dc1d785a4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5190,9 +5190,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5333,8 +5333,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5357,10 +5357,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a0761cfee3f..f06feeab1f8 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2309,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2645,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2659,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index fa78031ccbb..1ad83055316 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..cdfe1373cd8 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -139,9 +141,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 247c5bd2604..10a67288b39 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#291shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#290)
Re: Logical Replication of sequences

On Wed, Aug 6, 2025 at 2:28 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250806 version patch has the changes for the same.

Thank You for the patches. Please find a few comments:

1)
* If 'resync_all_sequences' is false:
* Add or remove tables and sequences that have been added to or removed
* from the publication since the last subscription creation or refresh.
* If 'resync_all_sequences' is true:
* Perform the above operation only for sequences.

Shall we update:
Perform the above operation only for sequences and resync all the
sequences including existing ones.

2)
XLogRecPtr srsublsn BKI_FORCE_NULL; /* remote LSN of the
state change

Shall we rename it to srremotelsn or srremlsn? srsublsn gives a
feeling that it is local lsn and should be in sync with the one
displayed by pg_get_sequence_data() locally but that is not the case.

3)
create sequence myseq1 start 1 increment 100;
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq1');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
1 | f | 0 | 0/017BEF10

postgres=# select sequencename, last_value from pg_sequences;
sequencename | last_value
--------------+------------
myseq1 |

For a freshly created sequence, last_value shown by pg_get_sequence_data
seems wrong. On doing nextval for the first time, last_value shown by
pg_get_sequence_data does not change, as the original value was already
wrong to start with.

4)
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>

It looks odd to say that 'last_value is the current value of the
sequence'. Why don't we name it curr_val? If this is an existing
function and thus we do not want to change the name, then we can say
something on the line that 'last sequence value set in sequence by
nextval or setval' or something
similar to what pg_sequences says for last_value.

5)
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.

Is the page_lsn the page lsn of sequence relation or lsn of the last
WAL record written (or in simpler terms that particular record's
page_lsn)? If it is relation page-lsn, it should not change.

6)
I have noticed that when I do nextval, log_cnt decreases and page_lsn
does not change until it crosses the threshold. This is in the context
of the output returned by pg_get_sequence_data. But on doing setval,
page_lsn changes every time and log_cnt is reset to 0. Is this the
expected behaviour or an issue in the output of pg_get_sequence_data()?
I did not get this information from setval's documentation. Can you
please review and confirm?

postgres=# SELECT nextval('myseq2');
nextval
---------
155
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
155 | t | 28 | 0/017C4498

postgres=# SELECT nextval('myseq2');
nextval
---------
175

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
175 | t | 27 | 0/017C4498

postgres=# SELECT setval('myseq2', 55, true);
setval
--------
55

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
55 | t | 0 | 0/017C4568

thanks
Shveta

#292Nisha Moond
nisha.moond412@gmail.com
In reply to: vignesh C (#290)
Re: Logical Replication of sequences

On Wed, Aug 6, 2025 at 2:28 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250806 version patch has the changes for the same.

Thank You for the patches.

patch-0005: sequencesync.c
+ aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+   ACL_UPDATE);
+ if (aclresult != ACLCHECK_OK)
+ aclcheck_error(aclresult,
+    get_relkind_objtype(sequence_rel->rd_rel->relkind),
+    seqname);

I see that the run_as_owner check has been removed from
LogicalRepSyncSequences() and added to copy_sequences() for the
SetSequence() call.

However, IIUC, the same check is also needed in
LogicalRepSyncSequences(). Currently, the sequencesync worker can fail
in the above permission check since user switching doesn’t happen when
run_as_owner is false.

```
ERROR: permission denied for sequence n1
```
Should we add the run_as_owner handling here as well to avoid this?
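
As a side note, until the worker performs the user switch the way
tablesync does, a minimal user-level workaround (hypothetical role name
sub_owner; assumes the subscription has run_as_owner = false and its
owner lacks privileges on the sequence) would be to grant the privilege
to the subscription owner directly:

```
-- sub_owner is the (hypothetical) subscription owner that the
-- sequencesync worker currently runs as; n1 is the failing sequence.
GRANT UPDATE ON SEQUENCE n1 TO sub_owner;
```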

--
Thanks,
Nisha

#293shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#291)
Re: Logical Replication of sequences

On Wed, Aug 6, 2025 at 4:29 PM shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Aug 6, 2025 at 2:28 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250806 version patch has the changes for the same.

Thank You for the patches. Please find a few comments:

1)
* If 'resync_all_sequences' is false:
* Add or remove tables and sequences that have been added to or removed
* from the publication since the last subscription creation or refresh.
* If 'resync_all_sequences' is true:
* Perform the above operation only for sequences.

Shall we update:
Perform the above operation only for sequences and resync all the
sequences including existing ones.

2)
XLogRecPtr srsublsn BKI_FORCE_NULL; /* remote LSN of the
state change

Shall we rename it to srremotelsn or srremlsn? srsublsn gives a
feeling that it is local lsn and should be in sync with the one
displayed by pg_get_sequence_data() locally but that is not the case.

3)
create sequence myseq1 start 1 increment 100;
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq1');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
1 | f | 0 | 0/017BEF10

postgres=# select sequencename, last_value from pg_sequences;
sequencename | last_value
--------------+------------
myseq1 |

For a freshly created sequence, last_value shown by pg_get_sequence_data
seems wrong. On doing nextval for the first time, last_value shown by
pg_get_sequence_data does not change, as the original value was already
wrong to start with.

4)
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>

It looks odd to say that 'last_value is the current value of the
sequence'. Why don't we name it curr_val? If this is an existing
function and thus we do not want to change the name, then we can say
something on the line that 'last sequence value set in sequence by
nextval or setval' or something
similar to what pg_sequences says for last_value.

5)
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.

Is the page_lsn the page lsn of sequence relation or lsn of the last
WAL record written (or in simpler terms that particular record's
page_lsn)? If it is relation page-lsn, it should not change.

6)
I have noticed that when I do nextval, log_cnt decreases and page_lsn
does not change until it crosses the threshold. This is in the context
of the output returned by pg_get_sequence_data. But on doing setval,
page_lsn changes every time and log_cnt is reset to 0. Is this the
expected behaviour or an issue in the output of pg_get_sequence_data()?
I did not get this information from setval's documentation. Can you
please review and confirm?

postgres=# SELECT nextval('myseq2');
nextval
---------
155
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
155 | t | 28 | 0/017C4498

postgres=# SELECT nextval('myseq2');
nextval
---------
175

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
175 | t | 27 | 0/017C4498

postgres=# SELECT setval('myseq2', 55, true);
setval
--------
55

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
55 | t | 0 | 0/017C4568

7)
For an all-seq publication, we see this:

Owner | All tables | All sequences | Inserts | Updates | Deletes |
Truncates | Generated columns | Via root
--------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
shveta | f | t | t | t | t | t
| none | f
(1 row)

I feel Inserts, Updates, Deletes and Truncates -- all should be marked
as 'f' instead of the default 't'. If we look at the pg_publication
documentation, it describes pubinsert, pubupdate, etc. in terms of DML
operations. These DML operations have no meaning for sequences, so it
makes more sense to mark these as 'f' for sequences. Thoughts?

8)
In pg_publication doc, we shall have a NOTE mentioning that pubinsert,
pubupdate, pubdelete, pubtruncate are not applicable to sequences and
thus will always be false for an all-seq publication. For an all
table, all seq publication; these fields will reflect values for
tables alone.

9)
GetAllTablesPublicationRelations

Earlier we had this name because the publication was for 'ALL TABLES',
but now it could be ALL SEQUENCES too. We shall rename this function.
Some options are: GetAllPublicationRelations,
GetPublicationRelationsForAll,
GetPublicationRelationsForAllTablesSequences

thanks
Shveta

#294vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#291)
6 attachment(s)
Re: Logical Replication of sequences

On Wed, 6 Aug 2025 at 16:29, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Aug 6, 2025 at 2:28 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250806 version patch has the changes for the same.

Thank You for the patches. Please find a few comments:

1)
* If 'resync_all_sequences' is false:
* Add or remove tables and sequences that have been added to or removed
* from the publication since the last subscription creation or refresh.
* If 'resync_all_sequences' is true:
* Perform the above operation only for sequences.

Shall we update:
Perform the above operation only for sequences and resync all the
sequences including existing ones.

Modified

2)
XLogRecPtr srsublsn BKI_FORCE_NULL; /* remote LSN of the
state change

Shall we rename it to srremotelsn or srremlsn? srsublsn gives a
feeling that it is local lsn and should be in sync with the one
displayed by pg_get_sequence_data() locally but that is not the case.

This is an existing column that is also used for tables, and the same
behavior is reused for sequences. Since this column has been in use for
a long time, I prefer not to change it.

3)
create sequence myseq1 start 1 increment 100;
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq1');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
1 | f | 0 | 0/017BEF10

postgres=# select sequencename, last_value from pg_sequences;
sequencename | last_value
--------------+------------
myseq1 |

For a freshly created sequence, last_value shown by pg_get_sequence_data
seems wrong. On doing nextval for the first time, last_value shown by
pg_get_sequence_data does not change, as the original value was already
wrong to start with.

This behavior comes from the base code. I will analyze further why it
was implemented that way and discuss it in the original thread.

4)
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>

It looks odd to say that 'last_value is the current value of the
sequence'. Why don't we name it curr_val? If this is an existing
function and thus we do not want to change the name, then we can say
something on the line that 'last sequence value set in sequence by
nextval or setval' or something
similar to what pg_sequences says for last_value.

Modified

5)
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.

Is the page_lsn the page lsn of sequence relation or lsn of the last
WAL record written (or in simpler terms that particular record's
page_lsn)? If it is relation page-lsn, it should not change.

Updated documentation

6)
I have noticed that when I do nextval, log_cnt decreases and page_lsn
does not change until it crosses the threshold. This is in the context
of the output returned by pg_get_sequence_data. But on doing setval,
page_lsn changes every time and log_cnt is reset to 0. Is this the
expected behaviour or an issue in the output of pg_get_sequence_data()?
I did not get this information from setval's documentation. Can you
please review and confirm?

postgres=# SELECT nextval('myseq2');
nextval
---------
155
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
155 | t | 28 | 0/017C4498

postgres=# SELECT nextval('myseq2');
nextval
---------
175

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
175 | t | 27 | 0/017C4498

postgres=# SELECT setval('myseq2', 55, true);
setval
--------
55

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
55 | t | 0 | 0/017C4568

I believe this behavior is expected. In the case of nextval,
PostgreSQL WAL-logs the sequence 32 values (SEQ_LOG_VALS) ahead and
serves subsequent nextval calls from this pre-logged range using the
increment_by setting. Since these values are predictable and already
reserved in WAL, they don't need to be logged individually.
However, with setval, the new value being set is arbitrary and cannot
be assumed to follow the previous sequence. It could be a jump
forward, backward, or even the same value. Because of this
unpredictability, the change must be explicitly WAL-logged to ensure
durability and consistency in case of recovery.
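
For readers following along, the expected pattern can be reproduced
with a short illustrative session (assumes the 0001 patch so that
pg_get_sequence_data exposes log_cnt and page_lsn; the exact log_cnt
values depend on how many values are still pre-logged):

```
CREATE SEQUENCE demo_seq;

SELECT nextval('demo_seq');
SELECT log_cnt, page_lsn FROM pg_get_sequence_data('demo_seq');
-- a WAL record reserving values ahead was written: log_cnt is high and
-- page_lsn points at that record

SELECT nextval('demo_seq');
SELECT log_cnt, page_lsn FROM pg_get_sequence_data('demo_seq');
-- log_cnt decreased by one, page_lsn unchanged: no new WAL record

SELECT setval('demo_seq', 1000);
SELECT log_cnt, page_lsn FROM pg_get_sequence_data('demo_seq');
-- setval is always WAL-logged: log_cnt is reset to 0, page_lsn advances
```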

Please find my response for the comments from [1]/messages/by-id/CAJpy0uADzXSyx9YYPB-tuCfNWfGi4__CotQ1T3-q7AwBVCZRrg@mail.gmail.com:

7)
For an all-seq publication, we see this:

Owner | All tables | All sequences | Inserts | Updates | Deletes |
Truncates | Generated columns | Via root
--------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
shveta | f | t | t | t | t | t
| none | f
(1 row)

I feel Inserts, Updates, Deletes and Truncates -- all should be marked
as 'f' instead of the default 't'. If we look at the pg_publication
documentation, it describes pubinsert, pubupdate, etc. in terms of DML
operations. These DML operations have no meaning for sequences, so it
makes more sense to mark these as 'f' for sequences. Thoughts?

Modified

8)
In pg_publication doc, we shall have a NOTE mentioning that pubinsert,
pubupdate, pubdelete, pubtruncate are not applicable to sequences and
thus will always be false for an all-seq publication. For an all
table, all seq publication; these fields will reflect values for
tables alone.

I felt this is not required; the docs already mention that these apply
only to tables, e.g. "If true, INSERT operations are replicated for
tables in the publication."

9)
GetAllTablesPublicationRelations

Earlier we had this name because the publication was for 'ALL TABLES',
but now it could be ALL SEQUENCES too. We shall rename this function.
Some options are: GetAllPublicationRelations,
GetPublicationRelationsForAll,
GetPublicationRelationsForAllTablesSequences

Modified it to GetAllPublicationRelations

Please find my response for the comments from [2]/messages/by-id/CABdArM7aY+u5Fv9KMHp_iX=AEixfDum5e2ixZkWS8YcOt_NO7Q@mail.gmail.com:

patch-0005: sequencesync.c
+ aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+   ACL_UPDATE);
+ if (aclresult != ACLCHECK_OK)
+ aclcheck_error(aclresult,
+    get_relkind_objtype(sequence_rel->rd_rel->relkind),
+    seqname);

I see that the run_as_owner check has been removed from
LogicalRepSyncSequences() and added to copy_sequences() for the
SetSequence() call.

However, IIUC, the same check is also needed in
LogicalRepSyncSequences(). Currently, the sequencesync worker can fail
in the above permission check since user switching doesn’t happen when
run_as_owner is false.

```
ERROR: permission denied for sequence n1
```
Should we add the run_as_owner handling here as well to avoid this?

This check is not required here, as it will be done while setting the
sequence. Updated it.

The attached v20250813 patch has the changes for the same.
[1]: /messages/by-id/CAJpy0uADzXSyx9YYPB-tuCfNWfGi4__CotQ1T3-q7AwBVCZRrg@mail.gmail.com
[2]: /messages/by-id/CABdArM7aY+u5Fv9KMHp_iX=AEixfDum5e2ixZkWS8YcOt_NO7Q@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250813-0001-Enhance-pg_get_sequence_data-function.patchapplication/octet-stream; name=v20250813-0001-Enhance-pg_get_sequence_data-function.patchDownload
From 462e0d8b28c139b2b9e49b676e9c03908b260721 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250813 1/6] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..ac77869ca6e 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 451ae6f7f69..f5fa49517cf 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1778,15 +1779,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1801,6 +1803,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1816,11 +1822,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

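As a quick illustration of the workflow described in the commit message
above (hypothetical object names; assumes the rest of the patch series,
which records the publisher's page LSN in pg_subscription_rel.srsublsn
at sync time), an out-of-sync sequence could be detected roughly like
this:

```
-- On the publisher: current page LSN of the sequence
SELECT page_lsn FROM pg_get_sequence_data('s1');

-- On the subscriber: LSN recorded at the last synchronization
SELECT srsublsn FROM pg_subscription_rel WHERE srrelid = 's1'::regclass;

-- If the publisher's page_lsn is newer, re-synchronize:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
```
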
v20250813-0005-New-worker-for-sequence-synchronization-du.patchapplication/octet-stream; name=v20250813-0005-New-worker-for-sequence-synchronization-du.patchDownload
From 6e7de2ca7f9fc42b51a43ec61c4cbbcd51db3bbb Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 4 Aug 2025 18:51:55 +0530
Subject: [PATCH v20250813 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of the sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   9 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 629 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 125 +++-
 src/backend/replication/logical/tablesync.c   |  88 +--
 src/backend/replication/logical/worker.c      |  69 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   9 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1265 insertions(+), 167 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 28622e54aaa..383f01f83e2 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -352,7 +352,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0b05b879ca8..f4da3bc54b6 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index f5fa49517cf..708306b3b1c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -110,7 +110,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						Form_pg_sequence_data seqdataform,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -941,9 +940,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set a
+ * sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -994,7 +996,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1011,8 +1013,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1044,7 +1046,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1052,14 +1054,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1068,7 +1070,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1830,6 +1832,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 16ea1ed1fb1..9de23aede3d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -876,7 +876,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
  *     Add or remove tables and sequences that have been added to or removed
  * 	   from the publication since the last subscription creation or refresh.
  * If 'resync_all_sequences' is true:
- *     Perform the above operation only for sequences.
+ *     Perform the above operation only for sequences, and resynchronize all
+ *     of them, including sequences that were already synchronized.
  *
  * Note, this is a common function for handling different REFRESH commands
  * according to the parameter 'resync_all_sequences'
@@ -1030,7 +1031,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				if (resync_all_sequences)
 				{
 					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
-											   InvalidXLogRecPtr);
+											   InvalidXLogRecPtr, false);
 					ereport(DEBUG1,
 							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 											get_namespace_name(get_rel_namespace(relid)),
@@ -1079,7 +1080,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2012,7 +2013,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 37377f7eb63..60fb14861ab 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -832,6 +841,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset last_seqsync_start_time in the subscription's apply worker; it
+ * records when a sequencesync worker was last started.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -880,7 +908,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1246,7 +1274,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1547,7 +1575,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1587,6 +1615,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..fc8d4e9efef
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,629 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	check_and_launch_sync_worker(nsyncworkers, InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It reports sequences that are missing on the publisher, as well as sequences
+ * that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	bool		run_as_owner = MySubscription->runasowner;
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Relation	sequence_rel;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			/* Get the local sequence */
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!sequence_rel || !HeapTupleIsValid(tup))
+			{
+				elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+					 seqinfo->nspname, seqinfo->seqname);
+
+				batch_skipped_count++;
+				continue;
+			}
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				UserContext ucxt;
+
+				/*
+				 * Make sure that the copy command runs as the sequence owner,
+				 * unless the user has opted out of that behaviour.
+				 */
+				if (!run_as_owner)
+					SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				if (!run_as_owner)
+					RestoreUserContext(&ucxt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn, false);
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+										MySubscription->name,
+										seqinfo->nspname,
+										seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, NoLock);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If fewer sequences were processed than requested, this batch was
+		 * incomplete because some sequences are missing on the publisher.
+		 * Identify the missing sequences.
+		 */
+		if ((batch_succeeded_count + batch_skipped_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the full batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing and
+		 * fewer rows than batch_size may have been returned.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	List	   *sequences_to_copy = NIL;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subrel->srrelid;
+		seq_info->remote_seq_fetched = false;
+		seq_info->seqowner = sequence_rel->rd_rel->relowner;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+	{
+		pfree(seq_info->seqname);
+		pfree(seq_info->nspname);
+		pfree(seq_info);
+	}
+
+	list_free(sequences_to_copy);
+}
+
+/*
+ * Execute the initial synchronization of sequences with error handling.
+ * If the subscription has disable_on_error set, disable the subscription
+ * and exit on failure.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..ca5c278cd27 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,49 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+							 TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are static since the
+	 * same values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..b87413ff267 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				check_and_launch_sync_worker(nsyncworkers, rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 089a25d6c71..059ab614dc0 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -674,6 +674,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1214,7 +1219,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1336,7 +1344,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1392,7 +1403,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1458,7 +1472,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1593,7 +1610,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2435,7 +2455,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4082,7 +4105,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5262,7 +5288,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5382,8 +5409,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5486,6 +5513,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5505,14 +5536,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync, and sequencesync workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5583,6 +5616,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5595,9 +5632,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 87fd96e0ff5..5e0a5a989c2 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,15 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+	Oid			seqowner;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7920908395d..85e31e76cbe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -103,6 +104,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -252,6 +255,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -260,12 +264,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+										 TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -275,11 +283,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -344,15 +353,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b8a89275f13
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched for
+# the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is off'
+);
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# when the sequence definition differs between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 93ad46f33c0..f8777a7009f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250813-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch
From 06bdf208dc1aa20d3c10237ea136d2a6905216f2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:34:50 +0530
Subject: [PATCH v20250813 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command synchronizes the set of sequences associated with a
subscription based on the sequences currently present in the publication
on the publisher. It also marks the corresponding entries in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
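
For illustration, a minimal usage sketch (the subscription name "sub1" is
only an example and not part of this patch):

    -- Re-synchronize all sequences of the subscription from the publisher:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- Pick up tables and sequences newly added to the publication; sequences
    -- already known to the subscription are left as they are:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;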

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 374 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |   8 +
 src/test/regress/expected/subscription.out  |   8 +-
 src/test/regress/sql/subscription.sql       |   4 +
 15 files changed, 438 insertions(+), 127 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable for FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 60ce2016c7e..28622e54aaa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -478,7 +478,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -515,7 +517,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -527,8 +530,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -540,12 +557,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * When getting tables, if not_ready is false, get all tables; otherwise,
+ * get only tables that have not reached READY state.
+ * When getting sequences, if not_ready is false, get all sequences;
+ * otherwise, get only sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -554,6 +581,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -576,9 +606,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 1b3c5a55882..0b05b879ca8 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index faa3650d287..16ea1ed1fb1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,6 +107,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +717,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +734,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +748,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -755,16 +764,19 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -773,7 +785,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				CheckSubscriptionRelkind(get_rel_relkind(relid),
 										 rv->schemaname, rv->relname);
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -781,6 +793,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +821,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -843,18 +860,49 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh the publication objects (tables and
+ * sequences) associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'resync_all_sequences' is false:
+ *     Add or remove tables and sequences that have been added to or removed
+ * 	   from the publication since the last subscription creation or refresh.
+ * If 'resync_all_sequences' is true:
+ *     Perform the above operation only for sequences.
+ *
+ * Note that this is a common function for handling the different REFRESH
+ * commands according to the parameter 'resync_all_sequences':
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,9 +910,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
+	bool		refresh_tables = !resync_all_sequences;
 
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
@@ -885,16 +935,22 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names,
+								   fetch_sequence_list(wrconn,
+													   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -907,22 +963,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->retaindeadtuples, sub->origin,
+									  subrel_local_oids, subrel_count,
+									  sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +984,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,28 +1001,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1064,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1121,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1138,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1538,8 +1628,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1553,7 +1643,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, false);
 				}
 
 				break;
@@ -1593,8 +1683,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1612,18 +1702,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1725,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,12 +1738,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, false);
+
+				break;
+			}
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, true);
 
 				break;
 			}
@@ -1931,7 +2035,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2246,16 +2350,16 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * 	  statements with "retain_dead_tuples = true" and "origin = any", and for
+ * 	  ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
  *    when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
@@ -2314,24 +2418,28 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2611,6 +2719,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 68184f5d671..2005884e03c 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..9cefecf1da1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,11 +10983,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10f836156aa..c3e7cbcba3f 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9a223b8076a..87fd96e0ff5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..0042d0b0f07 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,11 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
+END;
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/regress/sql/subscription.sql b/src/test/regress/sql/subscription.sql
index f0f714fe747..4ace5f4fa95 100644
--- a/src/test/regress/sql/subscription.sql
+++ b/src/test/regress/sql/subscription.sql
@@ -240,6 +240,10 @@ BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
 END;
 
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+END;
+
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
 SELECT func();
-- 
2.43.0

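A minimal usage sketch of the subscriber-side command added by the patch above, with a hypothetical subscription name. As the grammar change and regression tests suggest, the command takes no WITH options and must be run outside a transaction block on an enabled subscription:

    -- hypothetical subscription name; run outside a transaction block
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

Judging by the new AlterSubscription_refresh(sub, true, NULL, true) call, sequence data is always copied for this form (copy_data is effectively true).
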
Attachment: v20250813-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 1e4a0e2f72a715c13d7d29ca3cfe40bc0aebaa27 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250813 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands have been enhanced: \d now displays which
publications contain the specified sequence, and \dRp shows whether a
specified publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
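
For illustration (publication names are hypothetical), the accepted forms are:

    CREATE PUBLICATION pub_sequences FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_everything FOR ALL TABLES, ALL SEQUENCES;

Per the publicationcmds.c changes below, WITH parameters such as publish are
rejected for a sequence-only publication and only draw a NOTICE when the
publication also covers all tables.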

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 122 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 792 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 803c26ab216..1f4a423d3dc 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -82,7 +82,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool  def_pub_action)
 {
 	ListCell   *lc;
 
@@ -91,10 +92,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -842,17 +843,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -879,13 +886,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a
+		 * FOR ALL SEQUENCES publication. If the publication includes tables
+		 * as well, issue a warning.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +947,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -996,10 +1025,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a
+		 * FOR ALL SEQUENCES publication. If the publication includes tables
+		 * as well, issue a warning.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1489,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1502,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2019,19 +2076,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index fc7a6639163..ad45e377add 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index dde85ed156c..75e52e2a1ac 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8b10f2313f3..10f836156aa 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3582,11 +3582,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 53268059142..c7c8b9e1262 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should emit a notice that the parameters are ignored for sequences
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES, FOR ALL SEQUENCES, or FOR TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index deddf0da844..d77bbc973f1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e6f2e93b2d6..f9bdc1615e6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2351,6 +2351,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
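
For quick reference, the behaviour exercised by the regression tests above
boils down to the following minimal sketch (object names here are
illustrative, and it assumes a server built with this patch set):

-- a FOR ALL SEQUENCES publication implicitly covers every sequence
CREATE SEQUENCE seq_demo;
CREATE PUBLICATION pub_all_sequences FOR ALL SEQUENCES;

-- the new puballsequences flag marks such publications in pg_publication
SELECT pubname, puballtables, puballsequences
FROM pg_publication
WHERE pubname = 'pub_all_sequences';

-- ALL TABLES and ALL SEQUENCES can be combined in one publication,
-- but each clause may be specified only once
CREATE PUBLICATION pub_all_objects FOR ALL SEQUENCES, ALL TABLES;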

v20250813-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchapplication/octet-stream; name=v20250813-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchDownload
From ad73b7c89d181c7a23b7117586a6e9ea005504e1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250813 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 244acf52f36..60ce2016c7e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -504,13 +504,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index cd0e19176fd..d12414cbabc 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within FinishSyncWorker(). Even if
+		 * we fail to remove this here, the apply worker will ensure it is
+		 * cleaned up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 0fdc5de57ba..089a25d6c71 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1215,7 +1215,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1337,7 +1337,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1393,7 +1393,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1459,7 +1459,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1594,7 +1594,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2436,7 +2436,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4083,7 +4083,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5555,7 +5555,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ad45e377add..7195e28a40f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5247,7 +5247,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5305,7 +5305,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f458447a0e5..9a223b8076a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7c0204dd6f4..7920908395d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -248,6 +248,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -271,9 +273,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index f9bdc1615e6..93ad46f33c0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2915,7 +2915,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0
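
As a quick way to see the per-relation state that the refactored
FetchRelationStates()/HasSubscriptionTables() code works from, the
pg_subscription_rel catalog can be inspected on the subscriber; with the
later patches in this series, synchronized sequences appear there as well
(states 'i' and 'r' only). An illustrative query:

-- per-relation sync state tracked for each subscription
SELECT s.subname,
       sr.srrelid::regclass AS relation,
       sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
ORDER BY s.subname;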

v20250813-0006-Documentation-for-sequence-synchronization.patchapplication/octet-stream; name=v20250813-0006-Documentation-for-sequence-synchronization.patchDownload
From 7fb3c437a7165b6b776a533267685e334a779349 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250813 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  61 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 465 insertions(+), 64 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index da8a7882580..2e0bedf9c6f 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8161,16 +8161,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8204,7 +8207,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8213,12 +8216,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..80dc1d785a4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5190,9 +5190,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5333,8 +5333,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5357,10 +5357,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a0761cfee3f..f06feeab1f8 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2309,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2645,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2659,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..cdfe1373cd8 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -139,9 +141,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 247c5bd2604..10a67288b39 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#295Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#294)
RE: Logical Replication of sequences

Dear Vignesh,

Thanks for updating the patch. Here are my small comments:

01.
Per pgindent report, publicationcmds.c should be fixed.

02.
```
+       ScanKeyInit(&skey[1],
+                               Anum_pg_subscription_rel_srsubstate,
+                               BTEqualStrategyNumber, F_CHARNE,
+                               CharGetDatum(SUBREL_STATE_READY));
```

I felt it would be more natural to use "srsubstate = 'i'" instead of "srsubstate <> 'r'".

03.
```
+               table_close(sequence_rel, NoLock);
+       }
+
+       /* Cleanup */
+       systable_endscan(scan);
+       table_close(rel, AccessShareLock);
+
+       CommitTransactionCommand();
```

To clarify, can we release the sequence at the end of the inner loop?

I found that the sequence relation is closed (but the lock is not released), and
then the transaction is committed once. This approach cannot avoid the sequence
being dropped by concurrent transactions, but maybe you did this for performance
reasons. So... I felt we may be able to release it a bit earlier.

04.
```
+                       sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+                       /* Get the local sequence */
+                       tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+                       if (!sequence_rel || !HeapTupleIsValid(tup))
+                       {
+                               elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+                                        seqinfo->nspname, seqinfo->seqname);
+
+                               batch_skipped_count++;
+                               continue;
+                       }
```

a. Code comment can be atop try_table_open().
b. Isn't it enough to check HeapTupleIsValid() here?

05.
```
+                       /* Update the sequence only if the parameters are identical */
+                       if (seqform->seqtypid == seqtypid &&
+                               seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+                               seqform->seqcycle == seqcycle &&
+                               seqform->seqstart == seqstart &&
+                               seqform->seqincrement == seqincrement)
```

I noticed that seqcache is not compared. Is there a reason?

06.
```
+       foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+       {
+               pfree(seq_info->seqname);
+               pfree(seq_info->nspname);
+               pfree(seq_info);
+       }
```

Per the comment atop foreach_delete_current(), we should not directly pfree()
the entry. Can you use foreach_delete_current()? I.e.,

07.
```
foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
{
pfree(seq_info->seqname);
pfree(seq_info->nspname);

sequences_to_copy =
foreach_delete_current(sequences_to_copy, seq_info);
}
```

08.
```
+$node_subscriber->init(allows_streaming => 'logical');
```

Actually no need to set to 'logical'.

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#296Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#294)
Re: Logical Replication of sequences

Hi,

On Wed, Aug 13, 2025 at 3:57 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 6 Aug 2025 at 16:29, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Aug 6, 2025 at 2:28 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20250806 version patch has the changes for the same.

Thank You for the patches. Please find a few comments:

1)
* If 'resync_all_sequences' is false:
* Add or remove tables and sequences that have been added to or removed
* from the publication since the last subscription creation or refresh.
* If 'resync_all_sequences' is true:
* Perform the above operation only for sequences.

Shall we update:
Perform the above operation only for sequences and resync all the
sequences including existing ones.

Modified

2)
XLogRecPtr srsublsn BKI_FORCE_NULL; /* remote LSN of the
state change

Shall we rename it to srremotelsn or srremlsn? srsublsn gives the
impression that it is a local LSN and should be in sync with the one
displayed by pg_get_sequence_data() locally, but that is not the case.

I felt that since this is an existing column which is also used for
tables, the same behavior can be used for sequences too. Since this
column has been in use for a long time, I prefer not to change it.

3)
create sequence myseq1 start 1 increment 100;
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq1');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
1 | f | 0 | 0/017BEF10

postgres=# select sequencename, last_value from pg_sequences;
sequencename | last_value
--------------+------------
myseq1 |

For a freshly created sequence, the last_value shown by
pg_get_sequence_data seems wrong. On doing nextval for the first time,
the last_value shown by pg_get_sequence_data does not change, as the
original value was wrong to start with.

This behavior already exists in the base code. I will analyze further
why it was implemented that way and discuss it in the original thread.

4)
+        Returns information about the sequence. <literal>last_value</literal>
+        is the current value of the sequence, <literal>is_called</literal>

It looks odd to say that 'last_value is the current value of the
sequence'. Why don't we name it curr_val? If this is an existing
function and thus we do not want to change the name, then we can say
something along the lines of 'last sequence value set in the sequence
by nextval or setval', or something similar to what pg_sequences says
for last_value.

Modified

5)
+        and <literal>page_lsn</literal> is the page LSN of the sequence
+        relation.

Is page_lsn the page LSN of the sequence relation, or the LSN of the
last WAL record written (or, in simpler terms, that particular record's
page LSN)? If it is the relation's page LSN, it should not change.

Updated documentation

6)
I have noticed that when I do nextval, log_cnt decreases and page_lsn
does not change until it crosses the threshold. This is in the context
of the output returned by pg_get_sequence_data. But on doing setval,
page_lsn changes every time and log_cnt is reset to 0. Is this expected
behaviour, or an issue in the output of pg_get_sequence_data()? I did
not find this information in setval's documentation. Can you please
review and confirm?

postgres=# SELECT nextval('myseq2');
nextval
---------
155
postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
155 | t | 28 | 0/017C4498

postgres=# SELECT nextval('myseq2');
nextval
---------
175

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
175 | t | 27 | 0/017C4498

postgres=# SELECT setval('myseq2', 55, true);
setval
--------
55

postgres=# select last_value, is_called, log_cnt, page_lsn from
pg_get_sequence_data('myseq2');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
55 | t | 0 | 0/017C4568

I believe this behavior is expected. In the case of nextval,
PostgreSQL prefetches 32 values in advance and uses the increment_by
setting to serve the next value from this cached range. Since these
values are predictable and already reserved, they don't need to be
WAL-logged individually.
However, with setval, the new value being set is arbitrary and cannot
be assumed to follow the previous sequence. It could be a jump
forward, backward, or even the same value. Because of this
unpredictability, the change must be explicitly WAL-logged to ensure
durability and consistency in case of recovery.
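
To illustrate (a rough, untested sketch; this assumes the enhanced
pg_get_sequence_data() from the 0001 patch and a hypothetical sequence name):

postgres=# CREATE SEQUENCE demo_seq;
postgres=# SELECT nextval('demo_seq');         -- first fetch; WAL-logs a batch of values ahead
postgres=# SELECT log_cnt FROM pg_get_sequence_data('demo_seq');  -- positive, values reserved ahead
postgres=# SELECT nextval('demo_seq');         -- served from the reserved range, no new WAL record
postgres=# SELECT log_cnt FROM pg_get_sequence_data('demo_seq');  -- decreased by one; page_lsn unchanged
postgres=# SELECT setval('demo_seq', 1000);    -- arbitrary value; WAL-logged immediately
postgres=# SELECT log_cnt FROM pg_get_sequence_data('demo_seq');  -- reset to 0; page_lsn advanced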

Please find my response for the comments from [1]:

7)
For an all-seq publication, we see this:

 Owner  | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
--------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 shveta | f          | t             | t       | t       | t       | t         | none              | f
(1 row)

I feel Inserts, Updates, Deletes and Truncates -- all should be marked
as 'f' instead of the default 't'. If we look at the doc of
pg_publication, it describes the DML operations on tables while
explaining pubinsert, pubupdate, etc. These DML operations have no
meaning for sequences, thus it makes more sense to mark these as 'f'
for sequences. Thoughts?

Modified

8)
In pg_publication doc, we shall have a NOTE mentioning that pubinsert,
pubupdate, pubdelete, pubtruncate are not applicable to sequences and
thus will always be false for an all-seq publication. For an all
table, all seq publication; these fields will reflect values for
tables alone.

I felt this is not required, as the docs already mention that it
applies only to tables, e.g. "If true, INSERT operations are replicated
for tables in the publication."

9)
GetAllTablesPublicationRelations

Earlier we had this name because the publication was for 'ALL TABLES',
but now it could be ALL SEQUENCES too. We shall rename this function.
Some options are: GetAllPublicationRelations,
GetPublicationRelationsForAll,
GetPublicationRelationsForAllTablesSequences

Modified it to GetAllPublicationRelations

Please find my response for the comments from [2]:

patch-0005: sequencesync.c
+ aclresult = pg_class_aclcheck(RelationGetRelid(sequence_rel), GetUserId(),
+   ACL_UPDATE);
+ if (aclresult != ACLCHECK_OK)
+ aclcheck_error(aclresult,
+    get_relkind_objtype(sequence_rel->rd_rel->relkind),
+    seqname);

I see that the run_as_owner check has been removed from
LogicalRepSyncSequences() and added to copy_sequences() for the
SetSequence() call.

However, IIUC, the same check is also needed in
LogicalRepSyncSequences(). Currently, the sequencesync worker can fail
in the above permission check since user switching doesn’t happen when
run_as_owner is false.

```
ERROR: permission denied for sequence n1
```
Should we add the run_as_owner handling here as well to avoid this?

This check is not required here, as it will be done during the
SetSequence() call. Updated it.

As I understand it, the logical replication of sequences implemented
by these patches shares the same user interface as table replication
(utilizing CREATE PUBLICATION and CREATE SUBSCRIPTION commands for
configuration). However, the underlying replication mechanism totally
differs from table replication. While table replication sends
changesets extracted from WAL records (i.e., changes are applied in
commit LSN order), sequence replication
synchronizes the subscriber's sequences with the publisher's current
state. This raises an interesting theoretical question: In a scenario
where we implement DDL replication (extracting and replicating DDL
statements from WAL records to subscribers, as previously proposed),
how would sequence-related DDL replication interact with the sequence
synchronization mechanism implemented in this patch?

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#297vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#296)
Re: Logical Replication of sequences

On Sat, 16 Aug 2025 at 14:15, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

As I understand it, the logical replication of sequences implemented
by these patches shares the same user interface as table replication
(utilizing CREATE PUBLICATION and CREATE SUBSCRIPTION commands for
configuration). However, the underlying replication mechanism totally
differs from table replication. While table replication sends
changesets extracted from WAL records (i.e., changes are applied in
commit LSN order), sequence replication
synchronizes the subscriber's sequences with the publisher's current
state. This raises an interesting theoretical question: In a scenario
where we implement DDL replication (extracting and replicating DDL
statements from WAL records to subscribers, as previously proposed),
how would sequence-related DDL replication interact with the sequence
synchronization mechanism implemented in this patch?

The handling of sequence DDL should mirror how we manage table DDL:
1. During CREATE SUBSCRIPTION - Create sequences along with tables;
there is no issue when initializing them during the initial sync.
2. During incremental synchronization - Treat sequence changes like
table changes:
2.a Creating new sequences: Apply the creation on the subscriber side
when the corresponding WAL record appears.
2.b Dropping sequences: Handle drops the same way; they should
propagate to and be executed on the subscriber.
2.c Handling modifications to existing sequences: Sequence DDL changes
can lead to two different outcomes:
i) No conflict - If the change applies cleanly, accept and apply it
immediately.
ii) Conflict - An example:
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
SELECT nextval('s1') — called several times, advancing the sequence
ALTER SEQUENCE s1 MAXVALUE 12;
-- Error:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

In such conflict cases, we should consider using setval() with
is_called = false to adjust the sequence safely and avoid errors.
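
For example, a rough sketch of that idea on the subscriber (untested,
reusing the s1 sequence from the example above):

SELECT setval('s1', 12, false);   -- clamp into the new range; the next nextval('s1') returns 12
ALTER SEQUENCE s1 MAXVALUE 12;    -- should now apply without the RESTART/MAXVALUE error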

Thoughts?

Regards,
Vignesh

#298vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#295)
6 attachment(s)
Re: Logical Replication of sequences

On Fri, 15 Aug 2025 at 16:46, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Thanks for updating the patch. Here are my small comments:

01.
Per pgindent report, publicationcmds.c should be fixed.

Modified

02.
```
+       ScanKeyInit(&skey[1],
+                               Anum_pg_subscription_rel_srsubstate,
+                               BTEqualStrategyNumber, F_CHARNE,
+                               CharGetDatum(SUBREL_STATE_READY));
```

I felt it would be more natural to use "srsubstate = 'i'" instead of "srsubstate <> 'r'".

Modified

03.
```
+               table_close(sequence_rel, NoLock);
+       }
+
+       /* Cleanup */
+       systable_endscan(scan);
+       table_close(rel, AccessShareLock);
+
+       CommitTransactionCommand();
```

To clarify, can we release the sequence at the end of the inner loop?

I found that the sequence relation is closed (but the lock is not released), and
then the transaction is committed once. This approach cannot avoid the sequence
being dropped by concurrent transactions, but maybe you did this for performance
reasons. So... I felt we may be able to release it a bit earlier.

Modified

04.
```
+                       sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+                       /* Get the local sequence */
+                       tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+                       if (!sequence_rel || !HeapTupleIsValid(tup))
+                       {
+                               elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+                                        seqinfo->nspname, seqinfo->seqname);
+
+                               batch_skipped_count++;
+                               continue;
+                       }
```

a. Code comment can be atop try_table_open().
b. Isn't it enough to check HeapTupleIsValid() here?

Modified

05.
```
+                       /* Update the sequence only if the parameters are identical */
+                       if (seqform->seqtypid == seqtypid &&
+                               seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+                               seqform->seqcycle == seqcycle &&
+                               seqform->seqstart == seqstart &&
+                               seqform->seqincrement == seqincrement)
```

I noticed that seqcache is not compared. Is there a reason?

I felt we could go ahead and set the sequence value even if seqcache
differs, unlike the other sequence parameters. That is the reason
I did not compare it. Thoughts?

06.
```
+       foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+       {
+               pfree(seq_info->seqname);
+               pfree(seq_info->nspname);
+               pfree(seq_info);
+       }
```

Per the comment atop foreach_delete_current(), we should not directly pfree()
the entry. Can you use foreach_delete_current()? I.e.,

Modified

07.
```
foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
{
pfree(seq_info->seqname);
pfree(seq_info->nspname);

sequences_to_copy =
foreach_delete_current(sequences_to_copy, seq_info);
}
```

Modified

08.
```
+$node_subscriber->init(allows_streaming => 'logical');
```

Actually no need to set to 'logical'.

Modified

Thanks for the comments, the updated version has the changes for the same.

Regards,
Vignesh

Attachments:

v20250818-0001-Enhance-pg_get_sequence_data-function.patch (application/octet-stream)
From e6b362ae2717451641dd2a53c94e4d125f1dd2dd Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250818 1/6] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..ac77869ca6e 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last sequence value set in the sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a3c8cff97b0..467947dce8b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250818-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From cec7493b60e15f40d27cf310d438175cadfa684d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250818 2/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands have been enhanced: \d now displays which
publications contain the specified sequence, and \dRp shows whether a
specified publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 122 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 792 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 803c26ab216..529bcf10d76 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -82,7 +82,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -91,10 +92,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -842,17 +843,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -879,13 +886,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a warning.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +947,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -996,10 +1025,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a warning.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1489,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1502,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2019,19 +2076,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, check whether any publication object type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index fc7a6639163..ad45e377add 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is omitted for publications that are FOR ALL SEQUENCES only */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index dde85ed156c..75e52e2a1ac 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8b10f2313f3..10f836156aa 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3582,11 +3582,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 53268059142..c7c8b9e1262 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index deddf0da844..d77bbc973f1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e6f2e93b2d6..f9bdc1615e6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2351,6 +2351,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250818-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From 55803619e46d04f322e44d9d28c815275eb0e864 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 23 Jul 2025 11:34:50 +0530
Subject: [PATCH v20250818 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command synchronizes the set of sequences associated with a
subscription based on the sequences currently present in the publication
on the publisher. It also marks the corresponding entries in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
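
For illustration, a minimal usage sketch of the commands described above,
using a hypothetical subscription named "sub1":

    -- re-synchronize the full set of published sequences
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- the existing refresh also picks up newly added or dropped sequences
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;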

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 374 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |  11 +-
 src/backend/replication/logical/syncutils.c |   5 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   3 +-
 src/test/regress/expected/rules.out         |   8 +
 src/test/regress/expected/subscription.out  |   8 +-
 src/test/regress/sql/subscription.sql       |   4 +
 15 files changed, 438 insertions(+), 127 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 60ce2016c7e..28622e54aaa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -478,7 +478,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -515,7 +517,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -527,8 +530,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -540,12 +557,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -554,6 +581,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -576,9 +606,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 1b3c5a55882..0b05b879ca8 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index faa3650d287..16ea1ed1fb1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,6 +107,7 @@ typedef struct SubOpts
 } SubOpts;
 
 static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +717,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +734,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +748,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -755,16 +764,19 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_table_list(wrconn, publications);
+			has_tables = relations != NIL;
+			relations = list_concat(relations,
+									fetch_sequence_list(wrconn, publications));
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
@@ -773,7 +785,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				CheckSubscriptionRelkind(get_rel_relkind(relid),
 										 rv->schemaname, rv->relname);
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -781,6 +793,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +821,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -843,18 +860,49 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	return myself;
 }
 
+/*
+ * Update the subscription to refresh both the publication and the publication
+ * objects associated with the subscription.
+ *
+ * Parameters:
+ *
+ * If 'copy_data' is true, the function will set the state to INIT; otherwise,
+ * it will set the state to READY.
+ *
+ * If 'validate_publications' is provided with a publication list, the
+ * function checks that the specified publications exist on the publisher.
+ *
+ * If 'resync_all_sequences' is false:
+ *     Add or remove tables and sequences that have been added to or removed
+ * 	   from the publication since the last subscription creation or refresh.
+ * If 'resync_all_sequences' is true:
+ *     Perform the above operation only for sequences.
+ *
+ * Note, this is a common function for handling different REFRESH commands
+ * according to the parameter 'resync_all_sequences'
+ *
+ * 1. ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *    (when parameter resync_all_sequences is true)
+ *
+ *    The function will mark all sequences with INIT state.
+ *
+ * 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION [WITH (copy_data=true|false)]
+ *    (when parameter resync_all_sequences is false)
+ *
+ *    The function will update only the newly added tables and/or sequences
+ *    based on the copy_data parameter.
+ */
 static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-						  List *validate_publications)
+						  List *validate_publications, bool resync_all_sequences)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,9 +910,11 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
+	bool		refresh_tables = !resync_all_sequences;
 
 	/* Load the library providing us libpq calls. */
 	load_file("libpqwalreceiver", false);
@@ -885,16 +935,22 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			check_publications(wrconn, validate_publications);
 
 		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		if (refresh_tables)
+			pubrel_names = fetch_table_list(wrconn, sub->publications);
+
+		/* Get the sequence list from publisher. */
+		pubrel_names = list_concat(pubrel_names,
+								   fetch_sequence_list(wrconn,
+													   sub->publications));
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, refresh_tables, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -907,22 +963,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		qsort(subrel_local_oids, subrel_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		if (refresh_tables)
+			check_publications_origin(wrconn, sub->publications, copy_data,
+									  sub->retaindeadtuples, sub->origin,
+									  subrel_local_oids, subrel_count,
+									  sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +984,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,28 +1001,48 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
-			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			if (bsearch(&relid, pubrel_local_oids,
+						list_length(pubrel_names), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * The resync_all_sequences flag will only be set to true for
+				 * the REFRESH PUBLICATION SEQUENCES command, indicating that
+				 * the existing sequences need to be re-synchronized by
+				 * resetting the relation to its initial state.
+				 */
+				if (resync_all_sequences)
+				{
+					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+											   InvalidXLogRecPtr);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+											get_namespace_name(get_rel_namespace(relid)),
+											get_rel_name(relid),
+											sub->name));
+				}
+			}
+			else
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1064,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1121,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1138,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1538,8 +1628,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1553,7 +1643,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = stmt->publication;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  stmt->publication);
+											  stmt->publication, false);
 				}
 
 				break;
@@ -1593,8 +1683,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1612,18 +1702,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 					sub->publications = publist;
 
 					AlterSubscription_refresh(sub, opts.copy_data,
-											  validate_publications);
+											  validate_publications, false);
 				}
 
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1725,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,12 +1738,26 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
+
+				AlterSubscription_refresh(sub, opts.copy_data, NULL, false);
+
+				break;
+			}
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
 
-				AlterSubscription_refresh(sub, opts.copy_data, NULL);
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES");
+
+				AlterSubscription_refresh(sub, true, NULL, true);
 
 				break;
 			}
@@ -1931,7 +2035,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2246,16 +2350,16 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * 	  statements with "retain_dead_tuples = true" and "origin = any", and for
+ * 	  ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
  *    when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
@@ -2314,24 +2418,28 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2611,6 +2719,68 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	return tablelist;
 }
 
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[2] = {TEXTOID, TEXTOID};
+	List	   *seqlist = NIL;
+	int			server_version = walrcv_server_version(wrconn);
+
+	/* Skip sequence fetch if the publisher is older than version 19 */
+	if (server_version < 190000)
+		return seqlist;
+
+	Assert(list_length(publications) > 0);
+
+	initStringInfo(&cmd);
+
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT s.schemaname, s.sequencename\n"
+						   "FROM pg_catalog.pg_publication_sequences s\n"
+						   "WHERE s.pubname IN (");
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoChar(&cmd, ')');
+
+	res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				errmsg("could not receive list of sequences from the publisher: %s",
+					   res->err));
+
+	/* Process sequences. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *nspname;
+		char	   *relname;
+		bool		isnull;
+		RangeVar   *rv;
+
+		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+		Assert(!isnull);
+
+		rv = makeRangeVar(nspname, relname, -1);
+		seqlist = lappend(seqlist, rv);
+		ExecClearTuple(slot);
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+	walrcv_clear_result(res);
+
+	return seqlist;
+}
+
 /*
  * This is to report the connection failure while dropping replication slots.
  * Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index da0cbf41d6f..aa2ba72a708 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..9cefecf1da1 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,11 +10983,20 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10f836156aa..c3e7cbcba3f 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9a223b8076a..87fd96e0ff5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..c2e9583cdb7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,8 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..0042d0b0f07 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,11 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
+END;
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
diff --git a/src/test/regress/sql/subscription.sql b/src/test/regress/sql/subscription.sql
index f0f714fe747..4ace5f4fa95 100644
--- a/src/test/regress/sql/subscription.sql
+++ b/src/test/regress/sql/subscription.sql
@@ -240,6 +240,10 @@ BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
 END;
 
+BEGIN;
+ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION SEQUENCES;
+END;
+
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
 SELECT func();
-- 
2.43.0
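
To make the subscriber-side commands touched by this series easier to follow,
here is a minimal usage sketch. The object names and connection string are
placeholders, and it assumes a publication on the publisher that already
contains the sequences of interest (the publication-side syntax comes from
earlier patches in the series):

-- With the default copy_data = true, the published sequences are recorded in
-- pg_subscription_rel with INIT state, alongside the published tables.
CREATE SUBSCRIPTION sub1
    CONNECTION 'dbname=postgres host=publisher'
    PUBLICATION pub1;

-- Adds newly published tables/sequences and drops removed ones; sequences
-- that are already synchronized are left untouched.
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

-- New command: additionally resets every published sequence to INIT state so
-- that all of them get re-synchronized.
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- On the publisher, the new view lists the sequences in each publication.
SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';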

Attachment: v20250818-0005-New-worker-for-sequence-synchronization-du.patch
From 55f90cff5a3cf0579eb3b8fe65cd6d43ba054051 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 18 Aug 2025 09:05:38 +0530
Subject: [PATCH v20250818 5/6] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of the
       sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher
       and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches
       of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
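
For reference, a small illustrative sketch of how the progress of this worker
can be observed with plain SQL on the subscriber; it relies only on existing
catalogs plus the columns and values added by this patch:

-- Sequences still waiting to be synchronized ('i' = INIT) versus already
-- synchronized ('r' = READY).
SELECT sr.srrelid::regclass AS seq, sr.srsubstate
  FROM pg_subscription_rel sr
  JOIN pg_class c ON c.oid = sr.srrelid
 WHERE c.relkind = 'S';

-- While it runs, the sequencesync worker is reported with the new
-- "sequence synchronization" worker_type.
SELECT subid, pid, worker_type FROM pg_stat_subscription;

-- Errors of the sequencesync worker are counted separately.
SELECT subname, sequence_sync_error_count FROM pg_stat_subscription_stats;
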
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   9 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 630 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 125 +++-
 src/backend/replication/logical/tablesync.c   |  88 +--
 src/backend/replication/logical/worker.c      |  69 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   9 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1266 insertions(+), 167 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 28622e54aaa..383f01f83e2 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -352,7 +352,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0b05b879ca8..f4da3bc54b6 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 467947dce8b..d07b718fd83 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 16ea1ed1fb1..9de23aede3d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -876,7 +876,8 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
  *     Add or remove tables and sequences that have been added to or removed
  * 	   from the publication since the last subscription creation or refresh.
  * If 'resync_all_sequences' is true:
- *     Perform the above operation only for sequences.
+ *     Perform the above operation only for sequences and resync all the
+ *     sequences including existing ones.
  *
  * Note, this is a common function for handling different REFRESH commands
  * according to the parameter 'resync_all_sequences'
@@ -1030,7 +1031,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				if (resync_all_sequences)
 				{
 					UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
-											   InvalidXLogRecPtr);
+											   InvalidXLogRecPtr, false);
 					ereport(DEBUG1,
 							errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
 											get_namespace_name(get_rel_namespace(relid)),
@@ -1079,7 +1080,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2012,7 +2013,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 37377f7eb63..60fb14861ab 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -832,6 +841,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -880,7 +908,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1246,7 +1274,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1547,7 +1575,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1587,6 +1615,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..7ee9e2da88c
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,630 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework for establishing database connections to retrieve the
+ *    sequences needing synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	check_and_launch_sync_worker(nsyncworkers, InvalidOid,
+								 &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It reports sequences that are missing on the publisher, as well as sequences
+ * that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	bool		run_as_owner = MySubscription->runasowner;
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Relation	sequence_rel;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
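+
+			/*
+			 * Both the remote result set and sequences_to_copy are sorted by
+			 * (nspname, seqname), so a single forward scan with search_pos
+			 * is sufficient; entries skipped over here were not returned by
+			 * the publisher and are reported as missing later.
+			 */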
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence; skip if it has been dropped concurrently */
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!sequence_rel || !HeapTupleIsValid(tup))
+			{
+				elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+					 seqinfo->nspname, seqinfo->seqname);
+
+				if (HeapTupleIsValid(tup))
+					ReleaseSysCache(tup);
+				if (sequence_rel)
+					table_close(sequence_rel, RowExclusiveLock);
+
+				batch_skipped_count++;
+				continue;
+			}
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				UserContext ucxt;
+
+				/*
+				 * Make sure that the copy command runs as the sequence owner,
+				 * unless the user has opted out of that behaviour.
+				 */
+				if (!run_as_owner)
+					SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				if (!run_as_owner)
+					RestoreUserContext(&ucxt);
+
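+				/*
+				 * Unlike tables, sequences have no incremental catch-up
+				 * phase; the copied state is final, so the relation can be
+				 * marked READY right away.
+				 */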
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn, false);
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+										MySubscription->name,
+										seqinfo->nspname,
+										seqinfo->seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 seqinfo->nspname, seqinfo->seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, NoLock);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_skipped_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * Advance current_index by the batch size rather than by the number
+		 * of fetched rows, because sequences missing on the publisher reduce
+		 * the number of rows returned for the batch.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	List	   *sequences_to_copy = NIL;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
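+	/*
+	 * Scan pg_subscription_rel for relations of this subscription that are
+	 * still in INIT state; only sequences are collected below.
+	 */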
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subrel->srrelid;
+		seq_info->remote_seq_fetched = false;
+		seq_info->seqowner = sequence_rel->rd_rel->relowner;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+	{
+		pfree(seq_info->seqname);
+		pfree(seq_info->nspname);
+
+		sequences_to_copy = foreach_delete_current(sequences_to_copy, seq_info);
+	}
+
+	list_free(sequences_to_copy);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..ca5c278cd27 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * This is a clean exit of a sync worker; reset last_seqsync_start_time so
+	 * that a pending sequence synchronization is not delayed unnecessarily.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,49 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+							 TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation states are
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
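+			/*
+			 * Sequences are only flagged here because a single sequencesync
+			 * worker handles all pending sequences, whereas tables are
+			 * tracked individually so that per-table sync workers can be
+			 * launched.
+			 */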
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..b87413ff267 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
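+		/*
+		 * FetchRelationStates() filters sequences into a separate flag, so
+		 * only tables can appear in table_states_not_ready here.
+		 */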
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				check_and_launch_sync_worker(nsyncworkers, rstate->relid,
+											 &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 1f233725b00..bc6b3e01d84 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -674,6 +674,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1214,7 +1219,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1336,7 +1344,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1392,7 +1403,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1458,7 +1472,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1593,7 +1610,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2435,7 +2455,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4082,7 +4105,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5262,7 +5288,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5382,8 +5409,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5486,6 +5513,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5505,14 +5536,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync and sequencesync
+ * workers.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5583,6 +5616,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5595,9 +5632,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 87fd96e0ff5..5e0a5a989c2 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,15 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+	Oid			seqowner;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7920908395d..85e31e76cbe 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -103,6 +104,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -252,6 +255,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -260,12 +264,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void check_and_launch_sync_worker(int nsyncworkers, Oid relid,
+										 TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -275,11 +283,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -344,15 +353,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber to match the publisher so
+	# that the sequencesync no longer errors out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
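+
+	# The failed sequencesync worker is relaunched automatically once
+	# wal_retrieve_retry_interval has elapsed, so the sequence should reach a
+	# synchronized state without further action here.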
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..580da809114
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise the sequences would be
+# WAL-logged again and their log_cnt values would change, causing the test to
+# fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Set up structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Set up the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Set up logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
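+
+# Note: log_cnt reports how many more nextval() calls can be made before the
+# sequence must be WAL-logged again (32 values are prefetched by default), so
+# the expected values above rely on no checkpoint occurring during the test.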
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync sequences newly
+# added on the publisher, but should not pick up changes to sequences that
+# were already synchronized.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should sync sequences
+# newly added on the publisher, and should also pick up changes to sequences
+# that were already synchronized.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION SEQUENCES will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = false does not sync newly published sequence'
+);
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+
+# Confirm that the error for mismatched sequence definitions is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 93ad46f33c0..f8777a7009f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250818-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchapplication/octet-stream; name=v20250818-0003-Reorganize-tablesync-Code-and-Introduce-sy.patchDownload
From 72d9267b823e6a70524d6b6267a33eb4796f573b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250818 3/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 244acf52f36..60ce2016c7e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -504,13 +504,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index cd0e19176fd..d12414cbabc 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within FinishSyncWorker(). Even if
+		 * we fail to remove the origin here, the apply worker will ensure it
+		 * is cleaned up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8e343873454..1f233725b00 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1215,7 +1215,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1337,7 +1337,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1393,7 +1393,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1459,7 +1459,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1594,7 +1594,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2436,7 +2436,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4083,7 +4083,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5555,7 +5555,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ad45e377add..7195e28a40f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5247,7 +5247,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5305,7 +5305,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f458447a0e5..9a223b8076a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7c0204dd6f4..7920908395d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -248,6 +248,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -271,9 +273,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index f9bdc1615e6..93ad46f33c0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2915,7 +2915,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20250818-0006-Documentation-for-sequence-synchronization.patchapplication/octet-stream; name=v20250818-0006-Documentation-for-sequence-synchronization.patchDownload
From da536d2b47ca65bed4352d4b4231ae197f22ff45 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250818 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  61 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 465 insertions(+), 64 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index da8a7882580..2e0bedf9c6f 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8161,16 +8161,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8204,7 +8207,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8213,12 +8216,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 20ccb2d6b54..80dc1d785a4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5190,9 +5190,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5333,8 +5333,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5357,10 +5357,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
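As a rough illustration of the sizing described above (the values below are
made-up examples, not recommendations), the relevant settings could be adjusted
like this; note that max_logical_replication_workers can only be set at server
start, while max_sync_workers_per_subscription takes effect on reload:

    -- Reserve room for apply workers plus table synchronization workers,
    -- plus one worker for sequence synchronization.
    ALTER SYSTEM SET max_logical_replication_workers = 8;   -- requires restart
    ALTER SYSTEM SET max_sync_workers_per_subscription = 2; -- reload suffices
    SELECT pg_reload_conf();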
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index a0761cfee3f..f06feeab1f8 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
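For instance, a minimal sketch of the resolution described above, assuming a
hypothetical sequence public.s5 whose INCREMENT differs on the subscriber:

    -- On the subscriber: align the definition with the publisher's; the
    -- respawned sequence synchronization worker should then complete.
    ALTER SEQUENCE public.s5 INCREMENT BY 2;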
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
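A sketch of the three resolutions described above, assuming a hypothetical
sequence public.s6 that was dropped on the publisher and a subscription sub1:

    -- Option 1, on the publisher: recreate the missing sequence.
    CREATE SEQUENCE public.s6;

    -- Option 2, on the subscriber: drop the sequence if it is no longer needed.
    DROP SEQUENCE public.s6;

    -- Option 3, on the subscriber: stop synchronizing sequences that are no
    -- longer part of the publication.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;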
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values can become stale over time, because
+    incremental sequence changes are not replicated while the publisher's
+    sequences continue to advance.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
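One possible way to compare, assuming the sequences s1 and s2 and the
subscription sub1 from the examples below: run the same query on the publisher
and the subscriber, diff the output, and refresh if they disagree:

    -- Run on both nodes and compare the results.
    SELECT schemaname, sequencename, last_value
    FROM pg_sequences
    WHERE sequencename IN ('s1', 's2')
    ORDER BY 1, 2;

    -- If the subscriber lags behind the publisher, re-synchronize.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;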
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT NEXTVAL('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2309,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2645,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2659,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
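As a sketch of how this counter might be consulted (sequence_sync_error_count
is the column added by this patch; the other columns already exist in
pg_stat_subscription_stats):

    SELECT subname, apply_error_count, sync_error_count,
           sequence_sync_error_count
    FROM pg_stat_subscription_stats;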
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..cdfe1373cd8 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -67,6 +68,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
 
   <para>
    Commands <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command>,
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command>,
    <command>ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...</command>
    with <literal>refresh</literal> option as <literal>true</literal>,
    <command>ALTER SUBSCRIPTION ... SET (failover = true|false)</command> and
@@ -139,9 +141,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +160,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing table data and synchronize
+          sequences in the publications that are being subscribed to when
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for
+          recommendations on how to handle sequence definition differences
+          between the publisher and the subscriber, which might be reported when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +223,28 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Fetch missing sequence information from the publisher, then re-synchronize
+      sequence data with the publisher. Unlike <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any reported sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 247c5bd2604..10a67288b39 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#299Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#297)
Re: Logical Replication of sequences

On Mon, Aug 18, 2025 at 2:13 AM vignesh C <vignesh21@gmail.com> wrote:

On Sat, 16 Aug 2025 at 14:15, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

As I understand it, the logical replication of sequences implemented
by these patches shares the same user interface as table replication
(utilizing CREATE PUBLICATION and CREATE SUBSCRIPTION commands for
configuration). However, the underlying replication mechanism totally
differs from table replication. While table replication sends
changesets extracted from WAL records (i.e., changes are applied in
commit LSN order), sequence replication
synchronizes the subscriber's sequences with the publisher's current
state. This raises an interesting theoretical question: In a scenario
where we implement DDL replication (extracting and replicating DDL
statements from WAL records to subscribers, as previously proposed),
how would sequence-related DDL replication interact with the sequence
synchronization mechanism implemented in this patch?

The handling of sequence DDL should mirror how we manage table DDL:
1. During CREATE SUBSCRIPTION - Create sequences along with
tables—there’s no issue when initializing them during the initial
sync.
2. During Incremental Synchronization - Treat sequence changes like
table changes:
2.a Creating new sequences: Apply the creation on the subscriber side
when the corresponding WAL record appears.
2.b Dropping sequences: Handle drops in the same way; they should
propagate to and execute on the subscriber.
2.c. Handling Modifications to Existing Sequences
Sequence DDL changes can lead to two different outcomes:
i) No Conflict - If the change applies cleanly, accept and apply it immediately.
ii) Conflict
An example:
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
SELECT nextval('s1') — called several times, advancing the sequence
ALTER SEQUENCE s1 MAXVALUE 12;
-- Error:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

In such conflict cases, we should consider using setval() with
is_called = false to adjust the sequence safely and avoid errors.

Thoughts?

Thank you for the explanation.

IIUC even with DDL replication support for sequences, users would
still need to manage the order of DDL operations for sequences and
their synchronization (specifically when executing the REFRESH
PUBLICATION [SEQUENCE] command). For example, if a sequence is dropped
on the publisher, the subscriber would encounter synchronization
failures unless the DROP SEQUENCE is properly applied. This potential
issue concerns me.

I recall that Amit initially proposed an approach involving a special
NOOP record to enable the walsender to read and transmit sequence data
to the subscriber[1]/messages/by-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com. Have you considered incorporating this concept
into the current implementation? Under this approach, REFRESH
PUBLICATION [SEQUENCE] would simply trigger the subscriber to write a
special WAL record for sequence synchronization. Subsequently, when
decoding the WAL record, the walsender would collect sequence data
associated with its publications and transmit it to the subscriber.
The apply worker would then process sequence changes in the same
manner as table changes.

We could potentially optimize this process by including the LSN of the
last sequence synchronization in the WAL record, allowing the
walsender to transmit only those sequences whose page LSN exceeds this
value.
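
As a rough way to visualize that comparison (a sketch only, assuming the
enhanced pg_get_sequence_data() proposed in patch 0001 of this thread and
a hypothetical sequence s1):

-- on the publisher: page LSN of the sequence's last change
SELECT page_lsn FROM pg_get_sequence_data('s1');

-- on the subscriber: LSN recorded at the last successful sync
SELECT srsublsn FROM pg_subscription_rel WHERE srrelid = 's1'::regclass;

Only sequences whose publisher page_lsn is newer than the LSN recorded at
the last sync would need to be transmitted.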

This thread is quite long so I may have missed some previous
discussion of these points, so I apologize if these matters have
already been addressed.

Regards,

[1]: /messages/by-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#300Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Masahiko Sawada (#299)
Re: Logical Replication of sequences

On Mon, Aug 18, 2025 at 4:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

For example, if a sequence is dropped
on the publisher, the subscriber would encounter synchronization
failures unless the DROP SEQUENCE is properly applied.

This example is wrong. It seems DROP SEQUENCE works but we might have
problems with ALTER SEQUENCE.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#301vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#300)
Re: Logical Replication of sequences

On Tue, 19 Aug 2025 at 06:47, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Aug 18, 2025 at 4:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

For example, if a sequence is dropped
on the publisher, the subscriber would encounter synchronization
failures unless the DROP SEQUENCE is properly applied.

This example is wrong. It seems DROP SEQUENCE works but we might have
problems with ALTER SEQUENCE.

I also felt that DROP SEQUENCE does not pose a problem.

When it comes to ALTER SEQUENCE, there are two distinct cases to consider:
Case 1: Parameter Mismatch During REFRESH PUBLICATION SEQUENCES
Example:
-- Publisher
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;

-- Subscriber
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- Publisher
ALTER SEQUENCE s1 MAXVALUE 12;

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

In this scenario, the refresh fails with an error because the sequence
parameters no longer match:
2025-08-19 12:41:52.289 IST [209043] ERROR: logical replication
sequence synchronization failed for subscription "sub1"
2025-08-19 12:41:52.289 IST [209043] DETAIL: Mismatched sequence(s)
on subscriber: ("public.s1").
2025-08-19 12:41:52.289 IST [209043] HINT: For mismatched sequences,
alter or re-create local sequences to have matching parameters as
publishers.

In this case, the user simply needs to update the subscriber sequence
definition so that its parameters match the publisher's.
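
For instance, a minimal sketch of such a fix on the subscriber, reusing
the object names from the example above, could be:

ALTER SEQUENCE s1 MAXVALUE 12;   -- make the local definition match the publisher
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

Once the definitions match again, the refresh should go through without
the mismatch error.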

Case 2: Sequence Value Conflict While Applying DDL Changes (Future Patch)

Example:
-- Publisher
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
SELECT nextval('s1'); -- called several times, advancing sequence to 14

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT currval('s1');
currval
---------
14

Now on the publisher:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;

When applying the DDL change on the subscriber:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

This illustrates a value conflict between the current state of the
sequence on the subscriber and the altered definition from the
publisher.

For such cases, we could consider:
Allowing the user to resolve the conflict manually, or
Providing an option to reset the sequence automatically.
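
As a sketch of the manual route, using the names from the example above,
the user could reset the local sequence below the incoming limit before
the altered definition is applied:

SELECT setval('s1', 11, false);  -- next nextval() will return 11, within the new MAXVALUE
ALTER SEQUENCE s1 MAXVALUE 12;   -- the replicated (or manual) definition change now applies cleanly

Whether this is done manually or via an automatic reset option is exactly
the choice mentioned above.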

A similar scenario can also occur with tables if a DML operation is
executed on the subscriber.

I’m still not entirely sure which of these scenarios you were referring to.
Were you pointing to Case 2 (value conflict), or do you have another
case in mind?

Regards,
Vignesh

#302Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#301)
Re: Logical Replication of sequences

On Tue, Aug 19, 2025 at 1:44 AM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 19 Aug 2025 at 06:47, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Aug 18, 2025 at 4:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

For example, if a sequence is dropped
on the publisher, the subscriber would encounter synchronization
failures unless the DROP SEQUENCE is properly applied.

This example is wrong. It seems DROP SEQUENCE works but we might have
problems with ALTER SEQUENCE.

I also felt that DROP SEQUENCE does not pose a problem.

When it comes to ALTER SEQUENCE, there are two distinct cases to consider:
Case 1: Parameter Mismatch During REFRESH PUBLICATION SEQUENCES
Example:
-- Publisher
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;

-- Subscriber
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- Publisher
ALTER SEQUENCE s1 MAXVALUE 12;

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

In this scenario, the refresh fails with an error because the sequence
parameters no longer match:
2025-08-19 12:41:52.289 IST [209043] ERROR: logical replication
sequence synchronization failed for subscription "sub1"
2025-08-19 12:41:52.289 IST [209043] DETAIL: Mismatched sequence(s)
on subscriber: ("public.s1").
2025-08-19 12:41:52.289 IST [209043] HINT: For mismatched sequences,
alter or re-create local sequences to have matching parameters as
publishers.

In this case, the user simply needs to update the subscriber sequence
definition so that its parameters match the publisher.

Case 2: Sequence value Conflict While Applying DDL Changes(Future patch)

Example:
-- Publisher
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
SELECT nextval('s1'); -- called several times, advancing sequence to 14

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT currval('s1');
currval
---------
14

Now on the publisher:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;

When applying the DDL change on the subscriber:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

This illustrates a value conflict between the current state of the
sequence on the subscriber and the altered definition from the
publisher.

For such cases, we could consider:
Allowing the user to resolve the conflict manually, or
Providing an option to reset the sequence automatically.

A similar scenario can also occur with tables if a DML operation is
executed on the subscriber.

I’m still not entirely sure which of these scenarios you were referring to.
Were you pointing to Case 2 (value conflict), or do you have another
case in mind?

I imagined something like case 2. For logical replication of tables,
if we support DDL replication (i.e., CREATE/ALTER/DROP TABLE), all
changes the apply worker executes are serialized in commit LSN order.
Therefore, users would not have to be concerned about schema changes
that happened to the publisher. On the other hand, for sequence
replication, even if we support DDL replication for sequences (i.e.,
CREATE/ALTER/DROP SEQUENCES), users would have to execute REFRESH
PUBLICATION SEQUENCES command after "ALTER SEQUENCE s1 MAXVALUE 12;"
has been replicated on the subscriber. Otherwise, REFRESH PUBLICATION
SEQUENCE command would fail because the sequence parameters no longer
match.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#303Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#302)
Re: Logical Replication of sequences

On Tue, Aug 19, 2025 at 11:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Aug 19, 2025 at 1:44 AM vignesh C <vignesh21@gmail.com> wrote:

Case 2: Sequence value Conflict While Applying DDL Changes(Future patch)

Example:
-- Publisher
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
SELECT nextval('s1'); -- called several times, advancing sequence to 14

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT currval('s1');
currval
---------
14

Now on the publisher:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;

When applying the DDL change on the subscriber:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

This illustrates a value conflict between the current state of the
sequence on the subscriber and the altered definition from the
publisher.

For such cases, we could consider:
Allowing the user to resolve the conflict manually, or
Providing an option to reset the sequence automatically.

A similar scenario can also occur with tables if a DML operation is
executed on the subscriber.

I’m still not entirely sure which of these scenarios you were referring to.
Were you pointing to Case 2 (value conflict), or do you have another
case in mind?

I imagined something like case 2. For logical replication of tables,
if we support DDL replication (i.e., CREATE/ALTER/DROP TABLE), all
changes the apply worker executes are serialized in commit LSN order.
Therefore, users would not have to be concerned about schema changes
that happened to the publisher. On the other hand, for sequence
replication, even if we support DDL replication for sequences (i.e.,
CREATE/ALTER/DROP SEQUENCES), users would have to execute REFRESH
PUBLICATION SEQUENCES command after "ALTER SEQUENCE s1 MAXVALUE 12;"
has been replicated on the subscriber. Otherwise, REFRESH PUBLICATION
SEQUENCE command would fail because the sequence parameters no longer
match.

In the example provided by Vignesh, the REFRESH should be done before
the ALTER SEQUENCE command; otherwise, the ALTER SEQUENCE won't be
replicated, right? If so, I don't think we can do much, given the
design choice we made: during DDL replication of sequences, we need to
consider it a conflict.

BTW, note that the same situation can happen even when the user
manually changed the sequence value on the subscriber in some way. So,
we can't prevent that.

--
With Regards,
Amit Kapila.

#304vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#302)
Re: Logical Replication of sequences

On Tue, 19 Aug 2025 at 23:33, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I imagined something like case 2. For logical replication of tables,
if we support DDL replication (i.e., CREATE/ALTER/DROP TABLE), all
changes the apply worker executes are serialized in commit LSN order.
Therefore, users would not have to be concerned about schema changes
that happened to the publisher. On the other hand, for sequence
replication, even if we support DDL replication for sequences (i.e.,
CREATE/ALTER/DROP SEQUENCES), users would have to execute REFRESH
PUBLICATION SEQUENCES command after "ALTER SEQUENCE s1 MAXVALUE 12;"
has been replicated on the subscriber. Otherwise, REFRESH PUBLICATION
SEQUENCE command would fail because the sequence parameters no longer
match.

I am summarizing the challenges identified so far (assuming we have
DDL replication implemented through WAL support):
1) Lack of sequence-synchronization resulting in DDL replication
failure/conflict.
On the subscriber, the sequence has advanced to 14:
SELECT currval('s1');
currval
---------
14

On the publisher, the sequence is reset to 11 and MAXVALUE is changed to 12:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;
If the subscriber did not execute REFRESH PUBLICATION SEQUENCES, DDL
replication will fail with an error:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

2) Manual DDL on subscriber resulting in sequence synchronization failure.
On the subscriber, the sequence maxvalue is changed:
ALTER SEQUENCE s1 MAXVALUE 12;

On the publisher, the sequence has advanced to 14:
SELECT currval('s1');
currval
---------
14

REFRESH PUBLICATION SEQUENCES will fail because setting the current
value to 14 would exceed the changed MAXVALUE of 12 on the subscriber
(a possible fix is sketched after this list).

3) Out of order DDL and REFRESH resulting in synchronization failure.
Initially we have the same sequence on pub and sub. Then let's say the
pub has made a parameter change:
ALTER SEQUENCE s1 MAXVALUE 12;
Now if this DDL is somehow not yet replicated on the sub, REFRESH
PUBLICATION SEQUENCES will fail initially and may work once the DDL is
replicated.
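
A possible fix for problem 2, sketched with the names used above and
assuming the subscription is called sub1 as in the earlier examples, is
to restore a compatible limit on the subscriber and retry:

ALTER SEQUENCE s1 MAXVALUE 20;   -- or: ALTER SEQUENCE s1 NO MAXVALUE
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;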
~~
Problems 1 and 2 exist in both designs. The WAL-based REFRESH may seem
slightly better for Problem 3, since REFRESH on the subscriber will
execute only after prior DDLs are replicated, but even with the
sequence-sync worker this isn't a major issue. If a user triggers
REFRESH before the DDL is replicated, the worker will refresh all
sequences except the mismatched one, and keep restarting and retrying
until the DDL is applied. Once that happens, the sequence sync
completes automatically, without the user doing another REFRESH.
Furthermore, the likelihood of a user executing REFRESH exactly during
the window between the DDL execution on the publisher and its
application on the subscriber seems relatively low.

The WAL-based approach OTOH introduces several additional challenges
that may outweigh its potential benefits:
1) It increases the load on the WAL sender to collect sequence values.
We are talking about all the sequences here, which could be huge in number.
2) Table replication may stall until sequence conflicts are resolved.
The chances of hitting a conflict/error could be higher here than for
tables, especially when sequence synchronization is not incremental and
the number of sequences is huge. If not handled by users, the continuous
and more frequent errors may even end up invalidating the slot on the
primary.

The worker approach neither blocks the apply worker in case of errors
nor adds extra load on the WAL sender. On its own, Case 3 doesn’t seem
significant enough to justify switching to a WAL-based design.
Overall, the worker-based approach appears to be less complex and a
better option.

Regards,
Vignesh

#305Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#298)
Re: Logical Replication of sequences

On Mon, Aug 18, 2025 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the updated version has the changes for the same.

I wanted to first discuss a few design points. The patch implements
"ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES" such that it
copies the existing sequences values and also adds/removes any missing
sequences. For the second part (add/remove sequences), we already have
a separate command "ALTER SUBSCRIPTION ... REFRESH PUBLICATION". So, I
feel the new command should only copy the sequence values, as that
will keep the interface easy to define and understand. Additionally,
it will help to simplify the code in the patch, especially in the
function AlterSubscription_refresh.
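
A sketch of that division of labor (subscription name hypothetical):

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;           -- add/remove sequences (and tables) to match the publications
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES; -- only re-copy values of already-subscribed sequences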

We previously discussed *not* launching an apply worker if the
corresponding publication(s) only publish sequences. See [1]/messages/by-id/CAA4eK1LcBoPBCKa9yFOQnvpBv3a2ejf_EWC=ZKksGcvqW7e0Zg@mail.gmail.com. We
should consider it again to see if that is a good idea. It will have
some drawbacks compared to the current approach of doing the sync via a
sync worker. The command could take time for a large number of
sequences, and on failure, a retry won't happen automatically, as it
would with background workers. Additionally, when the connect option is false for
a subscription during creation, the user needs to later call REFRESH
to sync the sequences after enabling the subscription. OTOH, doing the
sync during the command will bring more predictability and simplify
the patch. What do others think?

A few other comments:
1.
If the publication includes tables as well,
+ * issue a warning.
+ */
+ if (!stmt->for_all_tables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("WITH clause parameters are not supported for publications
defined as FOR ALL SEQUENCES"));
+
+ ereport(NOTICE,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("WITH clause parameters are not applicable to sequence
synchronization and will be ignored"));

Though we are issuing a NOTICE, the comment refers to a WARNING.

2.
/*
- * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
- * the list of relation oids that are already present on the subscriber.
- * This check should be skipped for these tables if checking for table
- * sync scenario. However, when handling the retain_dead_tuples scenario,
- * ensure all tables are checked, as some existing tables may now include
- * changes from other origins due to newly created subscriptions on the
- * publisher.
+ * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+ * subrel_local_oids contains the list of relation oids that are already
+ * present on the subscriber. This check should be skipped for these
+ * tables if checking for table sync scenario. However, when handling the
+ * retain_dead_tuples scenario, ensure all tables are checked, as some
+ * existing tables may now include changes from other origins due to newly
+ * created subscriptions on the publisher.

IIUC, this and other similar comments and err_message changes are just
using REFRESH PUBLICATION instead of REFRESH because now we have added
a SEQUENCES alternative as well. If so, let's make this a separate
refactoring patch placed just before the 0004 patch?

3.
ALTER_SUBSCRIPTION_DROP_PUBLICATION,
- ALTER_SUBSCRIPTION_REFRESH,
+ ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+ ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES,

The name of the new value seems a bit long. Can we try to slightly
shorten it by using ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ?

4.
+ case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQUENCES:
+ {
+ if (!sub->enabled)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not
allowed for disabled subscriptions"));
- AlterSubscription_refresh(sub, opts.copy_data, NULL);
+ PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ...
REFRESH PUBLICATION SEQUENCES");

Is there a need to restrict this new command in a transaction block?
We restrict other commands because those can lead to dropping slots,
which can't be rolled back, whereas sequence sync doesn't use slots,
so it should be okay to allow this new command inside a transaction
block.

5.
static void
 AlterSubscription_refresh(Subscription *sub, bool copy_data,
-   List *validate_publications)
+   List *validate_publications, bool resync_all_sequences)
…
+ if (resync_all_sequences)
+ {
+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+    InvalidXLogRecPtr);
…

During refresh, we are re-initializing the sequence state by setting
its previously synced LSN to InvalidXLogRecPtr and its relation state
to SUBREL_STATE_INIT. This loses the previously synced value, and
changing it to SUBREL_STATE_INIT doesn't sound intuitive either, even
though it serves the purpose. I feel it is better to use the
SUBREL_STATE_DATASYNC state, as that indicates data is being
synchronized, and to leave the LSN value the same as before.

[1]: /messages/by-id/CAA4eK1LcBoPBCKa9yFOQnvpBv3a2ejf_EWC=ZKksGcvqW7e0Zg@mail.gmail.com

--
With Regards,
Amit Kapila.

#306Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#303)
Re: Logical Replication of sequences

On Tue, Aug 19, 2025 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 19, 2025 at 11:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Aug 19, 2025 at 1:44 AM vignesh C <vignesh21@gmail.com> wrote:

Case 2: Sequence value Conflict While Applying DDL Changes(Future patch)

Example:
-- Publisher
CREATE SEQUENCE s1 MINVALUE 10 MAXVALUE 20;
SELECT nextval('s1'); -- called several times, advancing sequence to 14

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT currval('s1');
currval
---------
14

Now on the publisher:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;

When applying the DDL change on the subscriber:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

This illustrates a value conflict between the current state of the
sequence on the subscriber and the altered definition from the
publisher.

For such cases, we could consider:
Allowing the user to resolve the conflict manually, or
Providing an option to reset the sequence automatically.

A similar scenario can also occur with tables if a DML operation is
executed on the subscriber.

I’m still not entirely sure which of these scenarios you were referring to.
Were you pointing to Case 2 (value conflict), or do you have another
case in mind?

I imagined something like case 2. For logical replication of tables,
if we support DDL replication (i.e., CREATE/ALTER/DROP TABLE), all
changes the apply worker executes are serialized in commit LSN order.
Therefore, users would not have to be concerned about schema changes
that happened to the publisher. On the other hand, for sequence
replication, even if we support DDL replication for sequences (i.e.,
CREATE/ALTER/DROP SEQUENCES), users would have to execute REFRESH
PUBLICATION SEQUENCES command after "ALTER SEQUENCE s1 MAXVALUE 12;"
has been replicated on the subscriber. Otherwise, REFRESH PUBLICATION
SEQUENCE command would fail because the sequence parameters no longer
match.

In the example provided by Vignesh, it should do REFRESH before the
ALTER SEQUENCE command; otherwise, the ALTER SEQUENCE won't be
replicated, right?

Not sure. The REFRESH command is specifically used to synchronize
values (such as last_value) of the local sequence to the remote ones,
but this only works when their definitions match. In contrast, DDL
replication for sequences handles changes to the sequence definition
itself. While DDLs are automatically replicated through logical
replication based on WAL records, the REFRESH command requires manual
execution by users. Therefore, I believe ALTER SEQUENCE statements
would be replicated regardless of when users execute the REFRESH
command. This means users would need to carefully consider the
ordering of these operations to prevent potential conflicts.

If so, I don't think we can do much with the design
choice we made. During DDL replication of sequences, we need to
consider it as a conflict.

BTW, note that the same situation can happen even when the user
manually changed the sequence value on the subscriber in some way. So,
we can't prevent that.

Yes, I understand that conflicts can occur when users manually modify
sequence values or parameters on the subscriber. However, in Vignesh's
example, users are only executing the REFRESH command, without
performing any ALTER SEQUENCE commands or setval() operations on the
subscriber. In this scenario, I don't see why conflicts would arise
even with DDL replication in place.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#307Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#306)
Re: Logical Replication of sequences

On Wed, Aug 20, 2025 at 11:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Aug 19, 2025 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

If so, I don't think we can do much with the design
choice we made. During DDL replication of sequences, we need to
consider it as a conflict.

BTW, note that the same situation can happen even when the user
manually changed the sequence value on the subscriber in some way. So,
we can't prevent that.

Yes, I understand that conflicts can occur when users manually modify
sequence values or parameters on the subscriber. However, in Vignesh's
example, users are only executing the REFRESH command, without
performing any ALTER SEQUENCE commands or setval() operations on the
subscriber. In this scenario, I don't see why conflicts would arise
even with DDL replication in place.

This is because DDL can also fail if the existing sequence data does
not adhere to the DDL change. This will be true even for tables, but
let's focus on the sequence case. See the relevant part of the example below:

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT currval('s1');
currval
---------
14

-- Now on the publisher:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;

When applying the DDL change on the subscriber:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

Here the user has intentionally reduced the existing value of the
sequence to 11 on the publisher after the REFRESH command and then
performed a DDL that is compatible with the latest RESTART value (11).
Now, because we did the REFRESH before the user set the sequence value
to 11, the current value on the subscriber will be 14. When we
replicate the DDL, it will find the latest RESTART value (14) to be
greater than the DDL's changed MAXVALUE (12), so it will fail, and the
subscriber will retry. Users have to manually perform REFRESH once
again, or maybe, as part of a conflict resolution strategy, we can do
this internally. IIUC, we can't avoid this even if we start writing
WAL for the REFRESH command on the publisher.

--
With Regards,
Amit Kapila.

#308shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#304)
Re: Logical Replication of sequences

On Wed, Aug 20, 2025 at 2:25 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 19 Aug 2025 at 23:33, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I imagined something like case 2. For logical replication of tables,
if we support DDL replication (i.e., CREATE/ALTER/DROP TABLE), all
changes the apply worker executes are serialized in commit LSN order.
Therefore, users would not have to be concerned about schema changes
that happened to the publisher. On the other hand, for sequence
replication, even if we support DDL replication for sequences (i.e.,
CREATE/ALTER/DROP SEQUENCES), users would have to execute REFRESH
PUBLICATION SEQUENCES command after "ALTER SEQUENCE s1 MAXVALUE 12;"
has been replicated on the subscriber. Otherwise, REFRESH PUBLICATION
SEQUENCE command would fail because the sequence parameters no longer
match.

I am summarizing the challenges identified so far (assuming we have
DDL replication implemented through WAL support)
1) Lack of sequence-synchronization resulting in DDL replication
failure/conflict.
On the subscriber, the sequence has advanced to 14:
SELECT currval('s1');
currval
---------
14

On the publisher, the sequence is reset to 11 and MAXVALUE is changed to 12:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;
If the subscriber did not execute REFRESH PUBLICATION SEQUENCES, DDL
replication will fail with error.
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

2) Manual DDL on subscriber resulting in sequence synchronization failure.
On the subscriber, the sequence maxvalue is changed:
ALTER SEQUENCE s1 MAXVALUE 12;

On the publisher, the sequence has advanced to 14:
SELECT currval('s1');
currval
---------
14

REFRESH PUBLICATION SEQUENCES will fail because setting currvalue to
14 is greater than the changed maxvalue 12 in the subscriber.

3) Out of order DDL and REFRESH resulting in synchronization failure.
Initially we have the same sequence on pub and sub. Then lets say pub
has done parameter change:
ALTER SEQUENCE s1 MAXVALUE 12;
Now if this DDL is somehow not replicated on sub, REFRESH PUBLICATION
SEQUENCES will fail initially and may work once DDL is replicated.
~~
Problems 1 and 2 exist in both designs. While the WAL-based REFRESH
may seem slightly better for Problem 3 since REFRESH on the subscriber
will execute only after prior DDLs are replicated—even with the
sequence-sync worker, this isn't a major issue. If a user triggers
REFRESH before the DDL is replicated, the worker will refresh all
sequences except the mismatched one, and keep restarting and retrying
until the DDL is applied. Once that happens, the sequence sync
completes automatically, without the user doing another REFRESH.
Furthermore, the likelihood of a user executing REFRESH exactly during
the window between the DDL execution on the publisher and its
application on the subscriber seems relatively low.

WAL-based approach OTOH introduces several additional challenges that
may outweigh its potential benefits:
1) Increases load on WAL sender to collect sequence values. We are
talking about all the sequences here which could be huge in number.
2) Table replication may stall until sequence conflicts are resolved.
The chances of hitting any conflict/error could be more here as
compared to tables specially when sequence synchronization is not
incremental and the number of sequences are huge. The continuous and
more frequent errors if not handled by users may even end up
invalidating the slot on primary.

The worker approach neither blocks the apply worker in case of errors
nor adds extra load on the WAL sender. On its own, Case 3 doesn’t seem
significant enough to justify switching to a WAL-based design.
Overall, the worker-based approach appears to be less complex and a
better option.

Agree on this. Please find a few comments on the previous patch:

1)
+        Returns information about the sequence. <literal>last_value</literal>
+        last sequence value set in sequence by nextval or setval,

<literal>last_value</literal> indicates ....

2)
+ * If 'resync_all_sequences' is true:
+ *     Perform the above operation only for sequences.

Shall we update:
Perform the above operation only for sequences and resync all the
sequences including existing ones.

<old comment, I think it somehow missed being addressed.>

3)
+ check_and_launch_sync_worker(nsyncworkers, InvalidOid,
+ &MyLogicalRepWorker->last_seqsync_start_time);

Shall we simply name it 'launch_sync_worker'?
'check' looks a little odd. All such functions (e.g.,
logicalrep_worker_launch) have internal checks, but the name need not
have the 'check' keyword.

4)
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.

shall we say:
'if there is a sync worker slot available' instead of 'if there is a
worker available'

5)
copy_sequences:
+ if (!sequence_rel || !HeapTupleIsValid(tup))
+ {
+ elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has
been dropped concurrently",
+ seqinfo->nspname, seqinfo->seqname);
+
+ batch_skipped_count++;
+ continue;
+ }

Is it possible that sequence_rel is valid while tuple is not? If
possible, then do we need table_close before continuing?

6)
In copy_sequences(), wherever we are using seqinfo->nspname and
seqinfo->seqname, shall we directly use the local vars nspname and seqname?

7)
LogicalRepSyncSequences:
+ /* Skip if sequence was dropped concurrently */
+ sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+ if (!sequence_rel)
+ continue;

Here we are not checking tuple-validity like we did in copy_sequences
(comment 5 above). I think this alone should suffice even in
copy_sequences(). What do you think?

thanks
Shveta

#309vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#308)
7 attachment(s)
Re: Logical Replication of sequences

On Thu, 21 Aug 2025 at 11:49, shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Aug 20, 2025 at 2:25 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 19 Aug 2025 at 23:33, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I imagined something like case 2. For logical replication of tables,
if we support DDL replication (i.e., CREATE/ALTER/DROP TABLE), all
changes the apply worker executes are serialized in commit LSN order.
Therefore, users would not have to be concerned about schema changes
that happened to the publisher. On the other hand, for sequence
replication, even if we support DDL replication for sequences (i.e.,
CREATE/ALTER/DROP SEQUENCES), users would have to execute REFRESH
PUBLICATION SEQUENCES command after "ALTER SEQUENCE s1 MAXVALUE 12;"
has been replicated on the subscriber. Otherwise, REFRESH PUBLICATION
SEQUENCE command would fail because the sequence parameters no longer
match.

I am summarizing the challenges identified so far (assuming we have
DDL replication implemented through WAL support)
1) Lack of sequence-synchronization resulting in DDL replication
failure/conflict.
On the subscriber, the sequence has advanced to 14:
SELECT currval('s1');
currval
---------
14

On the publisher, the sequence is reset to 11 and MAXVALUE is changed to 12:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;
If the subscriber did not execute REFRESH PUBLICATION SEQUENCES, DDL
replication will fail with error.
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

2) Manual DDL on subscriber resulting in sequence synchronization failure.
On the subscriber, the sequence maxvalue is changed:
ALTER SEQUENCE s1 MAXVALUE 12;

On the publisher, the sequence has advanced to 14:
SELECT currval('s1');
currval
---------
14

REFRESH PUBLICATION SEQUENCES will fail because setting currvalue to
14 is greater than the changed maxvalue 12 in the subscriber.

3) Out of order DDL and REFRESH resulting in synchronization failure.
Initially we have the same sequence on pub and sub. Then lets say pub
has done parameter change:
ALTER SEQUENCE s1 MAXVALUE 12;
Now if this DDL is somehow not replicated on sub, REFRESH PUBLICATION
SEQUENCES will fail initially and may work once DDL is replicated.
~~
Problems 1 and 2 exist in both designs. While the WAL-based REFRESH
may seem slightly better for Problem 3 since REFRESH on the subscriber
will execute only after prior DDLs are replicated—even with the
sequence-sync worker, this isn't a major issue. If a user triggers
REFRESH before the DDL is replicated, the worker will refresh all
sequences except the mismatched one, and keep restarting and retrying
until the DDL is applied. Once that happens, the sequence sync
completes automatically, without the user doing another REFRESH.
Furthermore, the likelihood of a user executing REFRESH exactly during
the window between the DDL execution on the publisher and its
application on the subscriber seems relatively low.

WAL-based approach OTOH introduces several additional challenges that
may outweigh its potential benefits:
1) Increases load on WAL sender to collect sequence values. We are
talking about all the sequences here which could be huge in number.
2) Table replication may stall until sequence conflicts are resolved.
The chances of hitting any conflict/error could be more here as
compared to tables specially when sequence synchronization is not
incremental and the number of sequences are huge. The continuous and
more frequent errors if not handled by users may even end up
invalidating the slot on primary.

The worker approach neither blocks the apply worker in case of errors
nor adds extra load on the WAL sender. On its own, Case 3 doesn’t seem
significant enough to justify switching to a WAL-based design.
Overall, the worker-based approach appears to be less complex and a
better option.

Agree on this. Please find a few comments on the previous patch:

1)
+        Returns information about the sequence. <literal>last_value</literal>
+        last sequence value set in sequence by nextval or setval,

<literal>last_value</literal> indicates ....

Modified

2)
+ * If 'resync_all_sequences' is true:
+ *     Perform the above operation only for sequences.

Shall we update:
Perform the above operation only for sequences and resync all the
sequences including existing ones.

<old comment, I think somehow missed to be addressed.>

This code is removed now

3)
+ check_and_launch_sync_worker(nsyncworkers, InvalidOid,
+ &MyLogicalRepWorker->last_seqsync_start_time);

Shall we simply name it as 'launch_sync_worker'.
'check' looks a little odd. All such functions (ex:
logicalrep_worker_launch) has internal checks but the name need not to
have 'check' keyword

Modified

4)
+ * Attempt to launch a sync worker (sequence or table) if there is a worker
+ * available and the retry interval has elapsed.

shall we say:
'if there is a sync worker slot available' instead of 'if there is a
worker available'

Modified

5)
copy_sequences:
+ if (!sequence_rel || !HeapTupleIsValid(tup))
+ {
+ elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has
been dropped concurrently",
+ seqinfo->nspname, seqinfo->seqname);
+
+ batch_skipped_count++;
+ continue;
+ }

Is it possible that sequence_rel is valid while tuple is not? If
possible, then do we need table_close before continuing?

I felt it is not possible, as the lock on the sequence has been
acquired successfully.

6)
In copy_sequences() wherever we are using seqinfo->nspname,
seqinfo->seqname; shall we directly use local vars nspname, seqname.

Modified

7)
LogicalRepSyncSequences:
+ /* Skip if sequence was dropped concurrently */
+ sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+ if (!sequence_rel)
+ continue;

Here we are not checking tuple-validity like we did in copy_sequences
(comment 5 above). I think this alone should suffice even in
copy_sequences(). What do you think?

In this case we just need the sequence and schema names, which are
available in sequence_rel, whereas in copy_sequences we need other
sequence parameters like min, max, cycle, etc., which require the
tuple. I felt the existing code is OK.

I have also addressed all the comments from [1]/messages/by-id/CAA4eK1+oVQW8oP=Lo1X8qac6dzg-fgGQ6R_F_psfokUEqe+a6w@mail.gmail.com in the attached
v20250823 version patch.
[1]: /messages/by-id/CAA4eK1+oVQW8oP=Lo1X8qac6dzg-fgGQ6R_F_psfokUEqe+a6w@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250823-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From 7876f681e55f8208b8838d2a41c0670379f84dc1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250823 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as REFRESH operations is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 4c01d21b2f3..a95c9a7b7c8 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1538,8 +1538,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1593,8 +1593,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1618,12 +1618,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1635,8 +1635,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1648,10 +1648,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2248,17 +2248,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2316,13 +2316,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..48104c22c4b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,7 +10983,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..629e2617f63 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250823-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch)
From 04f4c97974a0cfde86e721a13cb22832f3819f14 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250823 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence on the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN reflects
the state of the sequence at the time of synchronization. By comparing
the sequence's current LSN on the publisher (via pg_sequence_state())
with the LSN stored on the subscriber, users can detect whether the
sequence has advanced and is now out of sync, and hence whether
re-synchronization is needed (see the usage sketch after this patch).

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by <function>nextval</function> or <function>setval</function>,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a3c8cff97b0..467947dce8b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0
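
For reference, a minimal usage sketch of the enhanced function and of the
LSN comparison described in the commit message above. The sequence name is
illustrative, and the subscriber-side lookup is an assumption: storing a
sequence's sync LSN in pg_subscription_rel.srsublsn is only done by the
later patches in this series.

-- Publisher: read the sequence state, including the new columns.
SELECT last_value, is_called, log_cnt, page_lsn
FROM pg_get_sequence_data('myseq');

-- Subscriber (assumed, per the commit message): the LSN captured at sync
-- time would be stored in pg_subscription_rel.srsublsn; if the publisher's
-- page_lsn is newer than this value, the sequence has advanced since the
-- last synchronization and should be re-synchronized.
SELECT srsublsn
FROM pg_subscription_rel
WHERE srrelid = 'myseq'::regclass;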

Attachment: v20250823-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From a543cb592151a424772b4cf945fefdd85c0fb954 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250823 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql is enhanced so that \d on a sequence now displays which
publications include that sequence, and \dRp shows whether a publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication, but it cannot be combined with other
publication objects such as TABLE or TABLES IN SCHEMA (see the syntax
sketch below).

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
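
To make the new syntax concrete, here is a short sketch based on the
grammar changes and the pg_dump tests below (publication names are
illustrative):

-- Publish every sequence in the database; like FOR ALL TABLES, this
-- requires superuser.
CREATE PUBLICATION pub_sequences FOR ALL SEQUENCES;

-- ALL SEQUENCES and ALL TABLES can be combined in one publication.
CREATE PUBLICATION pub_everything FOR ALL TABLES, ALL SEQUENCES;

-- Not accepted: WITH parameters on a sequence-only publication, or
-- mixing ALL SEQUENCES with TABLE / TABLES IN SCHEMA objects.
-- CREATE PUBLICATION pub_bad FOR ALL SEQUENCES WITH (publish = 'insert');
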
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 122 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 792 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 803c26ab216..1d5812f8583 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -82,7 +82,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -91,10 +92,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -842,17 +843,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -879,13 +886,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -918,7 +947,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -996,10 +1025,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1440,6 +1489,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1452,20 +1502,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2019,19 +2076,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index fc7a6639163..ad45e377add 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index dde85ed156c..75e52e2a1ac 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8b10f2313f3..10f836156aa 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3582,11 +3582,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 53268059142..c7c8b9e1262 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describing a sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should emit a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES, FOR ALL SEQUENCES, or FOR TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index deddf0da844..d77bbc973f1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e4a9ec65ab4..80edb1e8a74 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2351,6 +2351,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250823-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From d031020c23a5ac494935942cd461a137669d7175 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250823 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 244acf52f36..60ce2016c7e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -504,13 +504,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index cd0e19176fd..d12414cbabc 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 22ad9051db3..87a7c7e79da 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1215,7 +1215,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1337,7 +1337,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1393,7 +1393,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1459,7 +1459,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1594,7 +1594,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2436,7 +2436,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4083,7 +4083,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5562,7 +5562,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ad45e377add..7195e28a40f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5247,7 +5247,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5305,7 +5305,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f458447a0e5..9a223b8076a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7c0204dd6f4..7920908395d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -248,6 +248,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -271,9 +273,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 80edb1e8a74..158ff805c2e 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2915,7 +2915,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250823-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From 0608bb4b55cbf4aa1192bf9bcecc83a3e4883bc9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:25:30 +0530
Subject: [PATCH v20250823 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
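
For illustration, a minimal usage sketch, assuming a sequence-only
publication named seq_pub and a subscription named seq_sub (the names and
connection string below are placeholders, not part of the patch):

    -- publisher
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres' PUBLICATION seq_pub;

    -- subscriber, later: re-synchronize sequence values from the publisher
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;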

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++++-
 src/backend/catalog/pg_subscription.c       |  60 ++++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 243 +++++++++++++-------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 15 files changed, 320 insertions(+), 102 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable for FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 60ce2016c7e..28622e54aaa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -478,7 +478,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -515,7 +517,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -527,8 +530,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -540,12 +557,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * get only the tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, get only the sequences that have not reached READY state
+ * (i.e. are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -554,6 +581,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -576,9 +606,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 1b3c5a55882..0b05b879ca8 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index a95c9a7b7c8..6413fbe0d5a 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -105,7 +106,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -715,6 +716,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -726,9 +733,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -743,6 +747,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -755,25 +763,27 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(RangeVar, rv, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -781,6 +791,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -804,7 +819,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -848,13 +863,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrel_names = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -862,7 +876,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -884,17 +899,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrel_names = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -912,17 +927,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
 		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
@@ -931,12 +941,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		{
 			RangeVar   *rv = (RangeVar *) lfirst(lc);
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
@@ -947,19 +958,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
 		qsort(pubrel_local_oids, list_length(pubrel_names),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
@@ -969,6 +980,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -990,41 +1002,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1033,10 +1059,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1050,11 +1076,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1066,6 +1094,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1658,6 +1710,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1933,7 +1997,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2329,11 +2393,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2503,8 +2571,8 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2512,7 +2580,7 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2528,8 +2596,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index da0cbf41d6f..aa2ba72a708 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 48104c22c4b..a359f3e293e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10988,6 +10988,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..a2ba0cef007 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10f836156aa..c3e7cbcba3f 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9a223b8076a..87fd96e0ff5 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,6 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
-- 
2.43.0
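
As a quick illustration of what the patch above wires up (the ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES grammar, the pg_get_publication_sequences() function, and the pg_publication_sequences view; the FOR ALL SEQUENCES publication syntax itself is added elsewhere in this series), a minimal usage sketch could look like the following. The publication and subscription names are invented for the example:

-- publisher: publish every sequence in the database
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

-- publisher: list the sequences covered by the publication
SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

-- subscriber: re-synchronize all previously subscribed sequences
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;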

Attachment: v20250823-0007-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 3ff0b5dd84691a5b56244a6c8ddf6f153309f1f6 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250823 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  29 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 263 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 466 insertions(+), 64 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index da8a7882580..2e0bedf9c6f 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8161,16 +8161,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8204,7 +8207,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8213,12 +8216,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a4b3e55ba5..8617ce2d806 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 0ac29928f17..477ea77a053 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,223 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, an ERROR is logged listing the missing sequences before the
+    process exits. The apply worker detects this failure and repeatedly
+    respawns the sequence synchronization worker to continue the
+    synchronization process until the sequences are either recreated on
+    the publisher, dropped on the subscriber, or removed from the
+    synchronization list. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this issue, either recreate the missing sequence on the
+    publisher using <link linkend="sql-createsequence"><command>CREATE SEQUENCE</command></link>,
+    drop the sequences on the subscriber if they are no longer needed using
+    <link linkend="sql-dropsequence"><command>DROP SEQUENCE</command></link>,
+    or run <link linkend="sql-altersubscription-params-refresh-publication">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> to
+    remove these sequences from synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2309,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2645,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2659,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..1ed668caf0f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>,
+      which only synchronizes newly added sequences,
+      <literal>REFRESH PUBLICATION SEQUENCES</literal> re-synchronizes the
+      sequence data for all subscribed sequences. It does not add newly
+      published sequences to, or remove missing ones from, the subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 247c5bd2604..10a67288b39 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
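
Given the srsubstate codes documented above for sequence entries in pg_subscription_rel ('i' = initialize, 'r' = ready), a catalog query along the following lines is one way to check which sequences of a subscription are still waiting to be synchronized; the subscription name is only an example:

SELECT c.relname AS sequence_name, sr.srsubstate
  FROM pg_subscription_rel sr
  JOIN pg_class c ON c.oid = sr.srrelid
  JOIN pg_subscription s ON s.oid = sr.srsubid
 WHERE s.subname = 'sub1'
   AND c.relkind = 'S'
   AND sr.srsubstate <> 'r';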

Attachment: v20250823-0006-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 830b46cfd7e9a8928813e42fd408776a2ab7238c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 15:14:25 +0530
Subject: [PATCH v20250823 6/7] New worker for sequence synchronization during 
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 630 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 124 +++-
 src/backend/replication/logical/tablesync.c   |  88 +--
 src/backend/replication/logical/worker.c      |  69 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |   9 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1262 insertions(+), 165 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 28622e54aaa..383f01f83e2 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -352,7 +352,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0b05b879ca8..f4da3bc54b6 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 467947dce8b..d07b718fd83 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 6413fbe0d5a..ec5268954f0 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1017,7 +1017,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -1974,7 +1974,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 37377f7eb63..60fb14861ab 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -832,6 +841,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time tracked in the subscription's apply
+ * worker; it records when a sequencesync worker was last started.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -880,7 +908,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1246,7 +1274,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1547,7 +1575,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1587,6 +1615,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
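
With the pg_stat_subscription changes above, the sequencesync worker reports
its type as "sequence synchronization" and, unlike tablesync workers, carries
no relid. While a refresh is in progress, something like the following
(subscription contents hypothetical) shows the mix of workers:

    SELECT subname, relid::regclass, pid, worker_type
    FROM pg_stat_subscription;
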
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..356722b5b49
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,630 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks taken on the
+ * sequence relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process has no direct database access and would need additional
+ *    infrastructure to connect to each database and retrieve the sequences
+ *    to be synchronized.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies found while synchronizing sequences: sequences that
+ * are missing on the publisher, and sequences that exist on both sides but
+ * whose parameters do not match.
+ */
+static void
+report_error_sequences(StringInfo missing_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s).",
+						 missing_seqs->data);
+		appendStringInfoString(combined_error_hint, "For missing sequences, use ALTER SUBSCRIPTION with either REFRESH PUBLICATION or REFRESH PUBLICATION SEQUENCES.");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (missing_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match those of the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match those of the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, List *sequences_to_copy, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	bool		run_as_owner = MySubscription->runasowner;
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs the
+	 * overhead of repeatedly starting and committing transactions. On the
+	 * other hand, we want to avoid keeping this batch transaction open for
+	 * extended periods, so it is currently limited to 100 sequences per
+	 * batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Build the list of (schema, sequence) pairs in the current batch to
+		 * fetch from the publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(%s, %s)",
+							 quote_literal_cstr(seqinfo->nspname),
+							 quote_literal_cstr(seqinfo->seqname));
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Relation	sequence_rel;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+			tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+			{
+				elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+					 nspname, seqname);
+
+				batch_skipped_count++;
+				continue;
+			}
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				UserContext ucxt;
+
+				/*
+				 * Make sure the sequence update runs as the sequence owner,
+				 * unless the user has opted out of that behaviour.
+				 */
+				if (!run_as_owner)
+					SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+				SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+
+				if (!run_as_owner)
+					RestoreUserContext(&ucxt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn, false);
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+										MySubscription->name,
+										nspname,
+										seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 nspname, seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, NoLock);
+		}
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d: %d attempted, %d succeeded, %d skipped, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count)));
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * If fewer rows were fetched than requested, some sequences are
+		 * missing on the publisher. Identify them so they can be reported
+		 * below.
+		 */
+		if ((batch_succeeded_count + batch_skipped_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+				}
+			}
+		}
+
+		/*
+		 * current_index is advanced by batch_size rather than by the number
+		 * of fetched rows, because some sequences may be missing on the
+		 * publisher and the result can contain fewer rows than the batch.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Raise an error if any sequences are missing on the remote server, or if
+	 * the local and remote sequence parameters do not match.
+	 */
+	if (missing_seqs->len || mismatched_seqs->len)
+		report_error_sequences(missing_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	List	   *sequences_to_copy = NIL;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subrel->srrelid;
+		seq_info->remote_seq_fetched = false;
+		seq_info->seqowner = sequence_rel->rd_rel->relowner;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, sequences_to_copy, subid);
+
+	foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+	{
+		pfree(seq_info->seqname);
+		pfree(seq_info->nspname);
+
+		sequences_to_copy = foreach_delete_current(sequences_to_copy, seq_info);
+	}
+
+	list_free(sequences_to_copy);
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
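
To put the above in user-visible terms (names are hypothetical, and the REFRESH
variant is the one introduced by this patch series), the expected flow is:

    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- sequences are added in INIT ('i') and switch to READY ('r') once the
    -- sequencesync worker has copied them:
    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate, sr.srsublsn
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';
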
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..2e7e710cf55 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * On a clean exit of the sequencesync worker, reset the
+	 * last_seqsync_start_time so that the apply worker does not delay
+	 * starting the next sequencesync worker.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +151,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +164,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +191,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if the subscription has one or more tables, else false.  If
+ * has_pending_sequences is not NULL, *has_pending_sequences is set to true
+ * when the subscription has sequences that are not yet in READY state.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static since
+	 * the same values can be reused until the relation states are
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +219,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +228,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +241,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +270,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
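
Note that launch_sync_worker() reuses wal_retrieve_retry_interval for the
relaunch throttling, so after a failed sequencesync attempt the next one is
only tried once that interval has elapsed. To retry faster while testing
(value illustrative only):

    ALTER SYSTEM SET wal_retrieve_retry_interval = '1s';
    SELECT pg_reload_conf();
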
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index a2ba0cef007..2a820182a1a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 87a7c7e79da..792be54f0e8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -674,6 +674,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1214,7 +1219,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1336,7 +1344,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1392,7 +1403,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1458,7 +1472,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1593,7 +1610,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2435,7 +2455,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4082,7 +4105,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5262,7 +5288,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5382,8 +5409,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5493,6 +5520,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5512,14 +5543,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5590,6 +5623,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5602,9 +5639,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
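
Since DisableSubscriptionAndExit() now reports the failure under the correct
worker type, a subscription using disable_on_error will also be disabled after
a sequence synchronization failure; for example (names hypothetical):

    ALTER SUBSCRIPTION sub1 SET (disable_on_error = on);
    -- after a failed sequence synchronization attempt:
    SELECT subname, subenabled FROM pg_subscription;
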
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
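
The new counter is flushed and reset together with the rest of the subscription
statistics, so it can be cleared with the existing reset function (subscription
name hypothetical):

    SELECT pg_stat_reset_subscription_stats(oid)
    FROM pg_subscription
    WHERE subname = 'sub1';
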
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
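
Since the new column is added in the middle of the function's output (between
apply_error_count and sync_error_count), monitoring queries that select columns
by name keep working unchanged, e.g.:

    SELECT subname, apply_error_count, sequence_sync_error_count,
           sync_error_count
    FROM pg_stat_subscription_stats;
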
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..80181825240 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3356,7 +3356,7 @@ struct config_int ConfigureNamesInt[] =
 		{"max_sync_workers_per_subscription",
 			PGC_SIGHUP,
 			REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL,
 		},
 		&max_sync_workers_per_subscription,
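
Since max_sync_workers_per_subscription now caps tablesync and sequencesync
workers together (see logicalrep_sync_worker_count above), subscriptions with
many tables may want the limit raised while the initial sync runs; the GUC is
reloadable, e.g. (value illustrative only):

    ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
    SELECT pg_reload_conf();
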
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 87fd96e0ff5..5e0a5a989c2 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,15 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+	Oid			seqowner;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7920908395d..b43d6c61ee6 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -103,6 +104,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -252,6 +255,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -260,12 +264,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -275,11 +283,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -344,15 +353,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..4469f0a8644
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# changes to existing sequences, but newly published sequences should not
+# be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for the mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 158ff805c2e..398c0d121cc 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

#310Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#307)
Re: Logical Replication of sequences

On Wed, Aug 20, 2025 at 9:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Aug 20, 2025 at 11:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Aug 19, 2025 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

If so, I don't think we can do much with the design
choice we made. During DDL replication of sequences, we need to
consider it as a conflict.

BTW, note that the same situation can happen even when the user
manually changed the sequence value on the subscriber in some way. So,
we can't prevent that.

Yes, I understand that conflicts can occur when users manually modify
sequence values or parameters on the subscriber. However, in Vignesh's
example, users are only executing the REFRESH command, without
performing any ALTER SEQUENCE commands or setval() operations on the
subscriber. In this scenario, I don't see why conflicts would arise
even with DDL replication in place.

This is because DDL can also fail if the existing sequence data does
not adhere to the DDL change. This will be true even for tables, but
let's focus on the sequence case. See below part of the example:

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT currval('s1');
currval
---------
14

-- Now on the publisher:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;

When applying the DDL change on the subscriber:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

Here the user has intentionally reduced the existing value of the
sequence to (11) on the publisher after the REFRESH command and then
performed a DDL that is compatible with the latest RESTART value (11).
Now, because we did REFRESH before the user set the value of sequence
as 11, the current value on the subscriber will be 14. When we
replicate the DDL, it will find the latest RESTART value as (14)
greater than DDL's changed MAXVALUE (12), so it will fail, and the
subscriber will retry. Users have to manually perform REFRESH once
again, or maybe as part of a conflict resolution strategy, we can do
this internally. IIUC, we can't avoid this even if we start writing
WAL for the REFRESH command on the publisher.

Right. DMLs and DDLs for sequences are replicated and applied to the
subscriber out of order even if we write WAL for the REFRESH command.

On the other hand, there is a scenario that we could cover with the
idea of writing WAL for the REFRESH command:

-- Publisher
CREATE SEQUENCE s AS integer;
select setval('s', pow(2,30)::int);

-- Subscriber
CREATE SEQUENCE s AS integer;
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
-- the last value of 's' is 1073741824

-- Publisher
alter sequence s as bigint;
select setval('s', pow(2,50)::bigint);

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
-- sequence synchronization keeps failing due to the mismatched sequence
-- definition until the ALTER SEQUENCE DDL is applied to the subscriber.
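
FWIW, with the current patch set this repeated failure shows up in the
subscriber's log roughly as follows (per the message format exercised in
the 036_sequences.pl test; the names are taken from the example above):

ERROR:  logical replication sequence synchronization failed for subscription "sub1"
DETAIL:  Mismatched sequence(s) on subscriber: ("public.s")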

I'm not suggesting to change the current approach but I'd just like to
figure out how sequence replication will work with future DDL
replication if we implement sequence synchronization as a logical
replication feature.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#311Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#310)
Re: Logical Replication of sequences

On Thu, Aug 21, 2025 at 10:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Aug 20, 2025 at 9:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Aug 20, 2025 at 11:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Aug 19, 2025 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

If so, I don't think we can do much with the design
choice we made. During DDL replication of sequences, we need to
consider it as a conflict.

BTW, note that the same situation can happen even when the user
manually changed the sequence value on the subscriber in some way. So,
we can't prevent that.

Yes, I understand that conflicts can occur when users manually modify
sequence values or parameters on the subscriber. However, in Vignesh's
example, users are only executing the REFRESH command, without
performing any ALTER SEQUENCE commands or setval() operations on the
subscriber. In this scenario, I don't see why conflicts would arise
even with DDL replication in place.

This is because DDL can also fail if the existing sequence data does
not adhere to the DDL change. This will be true even for tables, but
let's focus on the sequence case. See below part of the example:

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
SELECT currval('s1');
currval
---------
14

-- Now on the publisher:
SELECT setval('s1', 11);
ALTER SEQUENCE s1 MAXVALUE 12;

When applying the DDL change on the subscriber:
ERROR: RESTART value (14) cannot be greater than MAXVALUE (12)

Here the user has intentionally reduced the existing value of the
sequence to (11) on the publisher after the REFRESH command and then
performed a DDL that is compatible with the latest RESTART value (11).
Now, because we did REFRESH before the user set the value of sequence
as 11, the current value on the subscriber will be 14. When we
replicate the DDL, it will find the latest RESTART value as (14)
greater than DDL's changed MAXVALUE (12), so it will fail, and the
subscriber will retry. Users have to manually perform REFRESH once
again, or maybe as part of a conflict resolution strategy, we can do
this internally. IIUC, we can't avoid this even if we start writing
WAL for the REFRESH command on the publisher.

Right. DMLs and DDLs for sequences are replicated and applied to the
subscriber out of order even if we write WAL for the REFRESH command.

On the other hand, there is a scenario that we could cover with the
idea of writing WAL for the REFRESH command:

-- Publisher
CREATE SEQUENCE s AS integer;
select setval('s', pow(2,30)::int);

-- Subscriber
CREATE SEQUENCE s AS integer;
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
-- the last value of 's' is 1073741824

-- Publisher
alter sequence s as bigint;
select setval('s', pow(2,50)::bigint);

-- Subscriber
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
-- sequence synchronization keeps failing due to the mismatched sequence
-- definition until the ALTER SEQUENCE DDL is applied to the subscriber.

I'm not suggesting to change the current approach but I'd just like to
figure out how sequence replication will work with future DDL
replication if we implement sequence synchronization as a logical
replication feature.

I think we can have a conflict handler for
sequence_definition_mismatch that either LOGs the conflict so that the
user retries the operation after some time, or automatically waits and
retries, or a combination of both. As we are already working on
conflict handling (conflict detection, storage, and resolution), we
will at least have a way to store such a conflict and make users aware
of it; in the best case, we will have conflict resolution as well by
the time sequence DDL replication is in a position to land. Do you
have better ideas?
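
For context, a minimal SQL sketch of how such a mismatch is surfaced and
cleared with the current patch set, without any dedicated conflict
handling (sub1 and s1 are illustrative names, and the fix-up follows
what the 026_stats.pl test does):

-- On the subscriber, spot subscriptions whose sequence synchronization
-- keeps failing.
SELECT subname, sequence_sync_error_count
  FROM pg_stat_subscription_stats
 WHERE sequence_sync_error_count > 0;

-- Align the offending definition on the subscriber; the failing
-- sequencesync worker retries on its own, and a re-sync of the
-- sequences can also be forced explicitly.
ALTER SEQUENCE s1 INCREMENT BY 1;
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;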

BTW, do you have any suggestions on the first two design points raised
by me in email [1]/messages/by-id/CAA4eK1+oVQW8oP=Lo1X8qac6dzg-fgGQ6R_F_psfokUEqe+a6w@mail.gmail.com?

[1]: /messages/by-id/CAA4eK1+oVQW8oP=Lo1X8qac6dzg-fgGQ6R_F_psfokUEqe+a6w@mail.gmail.com

--
With Regards,
Amit Kapila.

#312shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#309)
Re: Logical Replication of sequences

On Thu, Aug 21, 2025 at 10:08 PM vignesh C <vignesh21@gmail.com> wrote:

I have also addressed all the comments from [1] in the attached
v20250823 version patch.
[1] - /messages/by-id/CAA4eK1+oVQW8oP=Lo1X8qac6dzg-fgGQ6R_F_psfokUEqe+a6w@mail.gmail.com

Thank You for the patches. I see a race condition between alter-seq and refresh.

Say we have triggered REFRESH on the sub, and the seq-sync worker is in
copy_sequences() where it has retrieved the local sequence by name but
has not locked the sequence relation yet. If we meanwhile alter the
sequence and change its name, the seq-sync worker ends up applying the
previously fetched values to that renamed sequence.
Steps:

1) create a sequence seq0 on pub and sub
2) do REFRESH PUBLICATION SEQ on sub
3) In seq-sync worker, during copy_sequences() hold debugger at:
seqinfo->remote_seq_fetched = true;
4) rename sequence on sub : ALTER SEQUENCE seq0 RENAME TO seq1;
5) release debugger in seq-sync worker. It will end up syncing seq1
using seq0 fetched from pub.

thanks
Shveta

#313Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: shveta malik (#312)
RE: Logical Replication of sequences

Dear Shveta,

Say we have triggered REFRESH on sub, and when seq-sync worker is in
copy_sequences() where it has retrieved the local sequence using
seqname while it has not locked the sequence-relation yet, if
meanwhile we alter sequence and change its name, seq-sync worker ends
up syncing that renamed sequence values with old-fetched sequence.
Steps:

Personally, I'm not sure it should be fixed. IIUC, this can happen because the
sequence sync worker does not handle everything in a single transaction.
However, the transaction would become considerably longer if we changed that.

From another perspective, assume that the sequencesync worker held a lock
during the synchronization. In your workload, the ALTER SEQUENCE command would
then be delayed until the synchronization is done. In the end, seq0 is synced
with the publisher's seq0 and then renamed to seq1 - eventually the same result.
Can you clarify whether there are other problematic workloads?
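
To spell out the interleaving I have in mind (just a sketch; S1 is the
sequencesync worker, S2 is the user session doing the rename):

-- S1: locks seq0 and starts applying the value fetched from the publisher
-- S2: ALTER SEQUENCE seq0 RENAME TO seq1;  -- blocks on S1's lock
-- S1: commits and releases the lock; seq0 now has the publisher's value
-- S2: the rename proceeds; the synced sequence is simply named seq1 afterwards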

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#314Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#309)
1 attachment(s)
RE: Logical Replication of sequences

Dear Vignesh,

Thanks for updating the patch. Below are my comments.

01.
```
/* Relation is either a sequence or a table */
relkind = get_rel_relkind(subrel->srrelid);
if (relkind != RELKIND_SEQUENCE)
continue;
```

Can you update the comment to "Skip if the relation is not a sequence"?
The current one does not seem related to what the code does.

02.
```
appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
```

I'm wondering what a good application_name would be. One idea is to follow the
tablesync worker: "pg_%u_sequence_sync_%u_%llu"; another is to use the same
name as the bgworker: "logical replication sequencesync worker for subscription %u".
Your name is also valid, but it looks quite different from the other processes.
Do others have any opinions?

03.
```
test_pub=# SELECT NEXTVAL('s1');
```
I feel no need to be capital.

04.
```
<link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
<link linkend="sql-altersubscription-params-refresh-publication">
<command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
<link linkend="sql-altersubscription-params-refresh-publication-sequences">
<command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
```

IIUC, we can use the "A, B, or C" style here.

05.
```
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table synchronization worker or
+        sequence synchronization worker will be respawned.
```

Same as above.

06.
```
<para>
State codes for sequences:
<literal>i</literal> = initialize,
<literal>r</literal> = ready
</para></entry>
```

Now the attribute can be "d".

07.

I feel we should add notes for the subscription options, e.g., that the
binary option is a no-op for sequence sync.

08.
```
/* Get the list of tables and sequences from the publisher. */
if (server_version >= 190000)
{
```

Not sure which is better, but I considered a way to append the sequence part of
the query on top of the >= v16 branch. Please see the attached diff. How do you feel?

09.
fetch_relation_list() still has some "tables", which should be "relations".

10.
```
if (sequencesync_worker)
{
/* Now safe to release the LWLock */
LWLockRelease(LogicalRepWorkerLock);
return;
}
```

Not sure the comment is needed because the lock is acquired just above. Instead,
we could note that at most one sequence sync worker exists per subscription.

11.
Currently copy_sequences() does not check privileges within the function; that
is done in SetSequence(). Basically this works, but if the user does not have
enough privilege for one of the sequences in the batch, an ERROR is
raised - none of the sequences can be synced. How about detecting it in the loop
and skipping that sequence? This would allow the other sequences to be synced.
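
Not wedded to the exact mechanics, but the per-sequence check I have in mind
is roughly the SQL-level equivalent of the following sketch (the role name is
illustrative, and I am assuming the privilege needed is the same UPDATE
privilege that setval() requires):

-- Sequences that the given user cannot update; syncing any of them
-- currently makes the whole batch fail.
SELECT c.oid::regclass AS seqname
  FROM pg_class c
 WHERE c.relkind = 'S'
   AND NOT has_sequence_privilege('apply_user', c.oid, 'UPDATE');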

12.
```
/*
* This is a clean exit of the sequencesync worker; reset the
* last_seqsync_start_time.
*/
logicalrep_reset_seqsync_start_time();
```

ISTM the function can be used by both the sequence and table sync workers.

13.
```
/* Fetch tables and sequences that are in non-ready state. */
rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
true);
```

IIUC no need to check sequences if has_pending_sequences is NULL.

Best regards,
Hayato Kuroda
FUJITSU LIMITED

Attachments:

fetch_relation_list.diffs (application/octet-stream)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index ec5268954f0..e71c0c62b5f 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -2596,25 +2596,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2639,6 +2622,19 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		if (server_version >= 190000)
+		{
+			/*
+			 * From version 19 onwards, sequences can also be included in the target list.
+			 */
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
+		}
 	}
 	else
 	{
#315vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#312)
7 attachment(s)
Re: Logical Replication of sequences

On Tue, 26 Aug 2025 at 11:21, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Aug 21, 2025 at 10:08 PM vignesh C <vignesh21@gmail.com> wrote:

I have also addressed all the comments from [1] in the attached
v20250823 version patch.
[1] - /messages/by-id/CAA4eK1+oVQW8oP=Lo1X8qac6dzg-fgGQ6R_F_psfokUEqe+a6w@mail.gmail.com

Thank You for the patches. I see a race condition between alter-seq and refresh.

Say we have triggered REFRESH on the sub, and the seq-sync worker is in
copy_sequences() where it has retrieved the local sequence by name but
has not locked the sequence relation yet. If we meanwhile alter the
sequence and change its name, the seq-sync worker ends up applying the
previously fetched values to that renamed sequence.
Steps:

1) create a sequence seq0 on pub and sub
2) do REFRESH PUBLICATION SEQ on sub
3) In seq-sync worker, during copy_sequences() hold debugger at:
seqinfo->remote_seq_fetched = true;
4) rename sequence on sub : ALTER SEQUENCE seq0 RENAME TO seq1;
5) release debugger in seq-sync worker. It will end up syncing seq1
using seq0 fetched from pub.

Thanks for reporting this; it is handled in the attached version of the patch.

Regards,
Vignesh

Attachments:

v20250901-0001-Enhance-pg_get_sequence_data-function.patch (application/octet-stream)
From 717a75d23fd38957381d4fd214e06fa06efe9206 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250901 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last value set for the sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250901-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From d4f892dd8921276b2aee594c4fb39a316ff315b0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 25 Mar 2025 09:23:48 +0530
Subject: [PATCH v20250901 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 244acf52f36..60ce2016c7e 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -504,13 +504,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 22ad9051db3..87a7c7e79da 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1215,7 +1215,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1337,7 +1337,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1393,7 +1393,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1459,7 +1459,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1594,7 +1594,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2436,7 +2436,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4083,7 +4083,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5562,7 +5562,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ad45e377add..7195e28a40f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5247,7 +5247,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5305,7 +5305,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index f458447a0e5..9a223b8076a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7c0204dd6f4..7920908395d 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -248,6 +248,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -271,9 +273,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 49af245ed8f..a7ff6601054 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250901-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (application/octet-stream)
From dc6c7ee3842b7258895da4e37a388662651c6b63 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250901 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 0d74398faf3..9f895d53d8b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1537,8 +1537,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1592,8 +1592,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1617,12 +1617,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1634,8 +1634,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1647,10 +1647,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2247,17 +2247,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2315,13 +2315,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..48104c22c4b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,7 +10983,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index a98c97f7616..629e2617f63 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

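As a quick illustration of the user-visible effect of the 0004 patch above: the SQL grammar is unchanged and ALTER SUBSCRIPTION ... REFRESH PUBLICATION is accepted as before; only the internal enum and the error wording now consistently spell out REFRESH PUBLICATION. A minimal sketch, using a placeholder subscription name regress_sub (the error texts are taken from the updated regression expectations in the patch):

    -- with the subscription disabled:
    ALTER SUBSCRIPTION regress_sub REFRESH PUBLICATION;
    ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions

    -- with it enabled, but inside a transaction block:
    BEGIN;
    ALTER SUBSCRIPTION regress_sub REFRESH PUBLICATION;
    ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
    END;
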
Attachment: v20250901-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From d9831b25c3ca656c54a92cbae832aa43aec75d93 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:25:30 +0530
Subject: [PATCH v20250901 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 332 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 405 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable for FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 60ce2016c7e..28622e54aaa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -478,7 +478,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -515,7 +517,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -527,8 +530,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -540,12 +557,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false get all tables, otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false get all sequences,
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -554,6 +581,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -576,9 +606,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 1b3c5a55882..0b05b879ca8 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 9f895d53d8b..73ee478408a 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -104,7 +105,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -714,6 +715,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -725,9 +732,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -742,6 +746,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -754,25 +762,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -780,6 +809,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -803,7 +837,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -847,13 +881,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -861,7 +894,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -883,17 +917,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -911,34 +945,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -946,28 +993,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -989,41 +1037,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1032,10 +1094,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1049,11 +1111,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1065,6 +1129,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1657,6 +1745,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1932,7 +2032,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2328,11 +2428,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2502,8 +2606,26 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	Assert(IsPointerList(list));
+
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2511,15 +2633,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2527,8 +2651,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2557,7 +2698,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2570,7 +2711,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2586,22 +2727,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 48104c22c4b..a359f3e293e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10988,6 +10988,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..a2ba0cef007 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 10f836156aa..c3e7cbcba3f 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9a223b8076a..660432341e6 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,6 +97,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 #endif							/* PG_SUBSCRIPTION_REL_H */
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a7ff6601054..a3f02884404 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0

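To make the subscriber-side additions above concrete, here is a minimal usage
sketch. It assumes the patch above is applied on both nodes; the subscription
name (sub1) and publication name (seq_pub) are hypothetical.

  -- Publisher: list the sequences published by a publication,
  -- using the new pg_publication_sequences view.
  SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

  -- Subscriber: resynchronize the published sequences for an existing
  -- subscription, using the new REFRESH PUBLICATION SEQUENCES form.
  ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
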
Attachment: v20250901-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From b183263200f6da691fabebd12ee5eadde15c590e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250901 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

In addition, psql's \d command now shows which publications include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
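
As a minimal sketch of the syntax this enables (publication names are
hypothetical):

  CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
  CREATE PUBLICATION mixed_pub FOR ALL TABLES, ALL SEQUENCES;
  -- Both forms require superuser.  A WITH (...) clause is rejected for a
  -- sequence-only publication; for the combined form it applies to tables
  -- only, and a NOTICE reports that it is ignored for sequences.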

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 122 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 792 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..bf823fb1cb2 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1439,6 +1488,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1451,20 +1501,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2018,19 +2075,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no publication object type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index fc7a6639163..ad45e377add 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index dde85ed156c..75e52e2a1ac 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 7a06af48842..6c8563fa4a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 8b10f2313f3..10f836156aa 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3582,11 +3582,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 53268059142..c7c8b9e1262 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should emit a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with a WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index deddf0da844..d77bbc973f1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a13e8162890..49af245ed8f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

v20250901-0006-New-worker-for-sequence-synchronization-du.patchapplication/octet-stream; name=v20250901-0006-New-worker-for-sequence-synchronization-du.patchDownload
From 46e7abd7fa2527329564e65defc14e651cd09c6e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 28 Aug 2025 11:37:21 +0530
Subject: [PATCH v20250901 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of sequences that are in INIT state.
    b) Reports sequences whose parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, this
      does not add newly published sequences or remove sequences that are
      no longer published.
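
As a rough illustration of the commands above (a sketch only: the
subscription, publication, and connection names are hypothetical, and the
last query assumes the usual pg_subscription_rel state encoding where 'r'
means READY):

    -- On the subscriber
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub;

    -- Pick up newly published tables and sequences later on
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- Re-synchronize the values of all known sequences
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

    -- Check which relations (including sequences) are not yet synchronized
    SELECT srrelid::regclass, srsubstate
    FROM pg_subscription_rel
    WHERE srsubstate <> 'r';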

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 124 +++-
 src/backend/replication/logical/tablesync.c   |  88 +--
 src/backend/replication/logical/worker.c      |  69 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  10 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1287 insertions(+), 165 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 28622e54aaa..383f01f83e2 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -352,7 +352,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0b05b879ca8..f4da3bc54b6 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * The log_cnt argument is currently used only by the sequencesync worker to
+ * set a sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 73ee478408a..372a98a6365 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1052,7 +1052,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2009,7 +2009,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 37377f7eb63..60fb14861ab 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -243,19 +243,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -265,7 +264,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -326,6 +325,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -414,7 +414,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -503,8 +504,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -627,13 +636,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,7 +709,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -832,6 +841,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset last_seqsync_start_time, the sequencesync worker start time that is
+ * tracked in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -880,7 +908,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1246,7 +1274,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1547,7 +1575,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1587,6 +1615,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..667e69fd214
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks taken on the
+ * sequence relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can read the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process has no direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences
+ *    for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+static List *sequences_to_copy = NIL;
+
+/*
+ * Handle the apply worker's side of sequence synchronization.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		/* Now safe to release the LWLock */
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	bool		run_as_owner = MySubscription->runasowner;
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs the
+	 * overhead of starting and committing transactions repeatedly. On the
+	 * other hand, we want to avoid keeping the batch transaction open for
+	 * extended periods, so it is currently limited to 100 sequences per
+	 * batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
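+	/*
+	 * For example, with 250 pending sequences the loop below runs three
+	 * batches of 100, 100, and 50 sequences, each committed in its own
+	 * transaction.
+	 */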
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
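+		/*
+		 * Per-column result types, in the same order as the query built
+		 * below: schname, seqname, last_value, is_called, log_cnt, page_lsn,
+		 * seqtypid, seqstart, seqincrement, seqmin, seqmax, seqcycle.
+		 */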
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy,
+																   current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", seqinfo->nspname,
+							 seqinfo->seqname);
+		}
+
+		initStringInfo(cmd);
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Relation	sequence_rel;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/* Retrieve the sequence object fetched from the publisher */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			sequence_rel = try_table_open(seqinfo->localrelid,
+										  RowExclusiveLock);
+			tup = SearchSysCache1(SEQRELID,
+								  ObjectIdGetDatum(seqinfo->localrelid));
+			if (!HeapTupleIsValid(tup))
+			{
+				elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+					 nspname, seqname);
+
+				batch_skipped_count++;
+				continue;
+			}
+
+			/* Skip the invalidated entry */
+			if (!seqinfo->entry_valid)
+			{
+				ReleaseSysCache(tup);
+				table_close(sequence_rel, RowExclusiveLock);
+				batch_skipped_count++;
+
+				ereport(LOG,
+						errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							   seqinfo->nspname, seqinfo->seqname));
+				continue;
+			}
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				UserContext ucxt;
+
+				/*
+				 * Make sure that the copy command runs as the sequence owner,
+				 * unless the user has opted out of that behaviour.
+				 */
+				if (!run_as_owner)
+					SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+				SetSequence(seqinfo->localrelid, last_value, log_cnt,
+							is_called);
+
+				if (!run_as_owner)
+					RestoreUserContext(&ucxt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn, false);
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+										MySubscription->name,
+										nspname,
+										seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 nspname, seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count)));
+
+		/*
+		 * If synchronization for this batch was incomplete, the remaining
+		 * sequences are missing on the publisher. Identify them and remove
+		 * them from the subscription.
+		 */
+		if ((batch_succeeded_count + batch_skipped_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+
+					RemoveSubscriptionRel(MySubscription->oid, seqinfo->localrelid);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+											seqinfo->nspname, seqinfo->seqname, MySubscription->name));
+				}
+			}
+		}
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by batch_size rather than by the number
+		 * of rows fetched, because some sequences may be missing on the
+		 * publisher and fewer rows than the batch size may be returned.
+		 */
+		current_index += batch_size;
+	}
+
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences no longer present on publisher have been removed from sequence resynchronization: (%s)",
+								missing_seqs->data));
+
+	/*
+	 * Raise an error if the local and remote sequence parameters do not
+	 * match.
+	 */
+	if (mismatched_seqs->len)
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+				errdetail("Mismatched sequence(s) on subscriber: (%s).", mismatched_seqs->data),
+				errhint("Alter or re-create the local sequences so that their parameters match the publisher's."));
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	/* Quick exit if no sequence is listed yet */
+	if (list_length(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+		{
+			if (seq_info->localrelid == reloid)
+			{
+				seq_info->entry_valid = false;
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all cache entries */
+		foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+			seq_info->entry_valid = false;
+	}
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
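+	/*
+	 * Scan pg_subscription_rel for this subscription's relations that are
+	 * not yet in READY state; the sequences among them are the ones that
+	 * still need to be synchronized.
+	 */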
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subrel->srrelid;
+		seq_info->remote_seq_fetched = false;
+		seq_info->seqowner = sequence_rel->rd_rel->relowner;
+		seq_info->entry_valid = true;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+	foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+	{
+		pfree(seq_info->seqname);
+		pfree(seq_info->nspname);
+
+		sequences_to_copy = foreach_delete_current(sequences_to_copy, seq_info);
+	}
+
+	list_free(sequences_to_copy);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..2e7e710cf55 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +151,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +164,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +191,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * If has_pending_sequences is non-NULL, *has_pending_sequences is set to true
+ * when there are sequences still waiting to be synchronized.
+ *
+ * Returns true if the subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +219,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +228,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +241,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +270,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index a2ba0cef007..2a820182a1a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 87a7c7e79da..792be54f0e8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -674,6 +674,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1214,7 +1219,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1336,7 +1344,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1392,7 +1403,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1458,7 +1472,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1593,7 +1610,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2435,7 +2455,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4082,7 +4105,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5262,7 +5288,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5382,8 +5409,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5493,6 +5520,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5512,14 +5543,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to set up the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5590,6 +5623,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5602,9 +5639,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index f137129209f..692414f959d 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3420,7 +3420,7 @@ struct config_int ConfigureNamesInt[] =
 
 	{
 		{"max_sync_workers_per_subscription", PGC_SIGHUP, REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 660432341e6..6f42c62674b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,16 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_fetched;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 202bd2d5ace..4bc05518c3a 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 7920908395d..b43d6c61ee6 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -103,6 +104,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -252,6 +255,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -260,12 +264,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -275,11 +283,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -344,15 +353,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that sequencesync doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..fbcd0e9927d
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should re-synchronize
+# all previously subscribed sequences, but should not sync newly published
+# sequences that have not yet been added to the subscription.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should log an error when the
+# sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the removal of the missing sequence from resynchronization is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences no longer present on publisher have been removed from sequence resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3f02884404..d0c4818db4b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250901-0007-Documentation-for-sequence-synchronization.patch
From d7f35c617c089606c70a88bbbe1772b2f5c9d986 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 22 May 2025 20:09:11 +0530
Subject: [PATCH v20250901 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 249 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 +++++--
 doc/src/sgml/ref/create_subscription.sgml |   6 +
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 453 insertions(+), 64 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index da8a7882580..4dc2bd99891 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8161,16 +8161,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8204,7 +8207,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8213,12 +8216,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a4b3e55ba5..8617ce2d806 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..4477f664331 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
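+   <para>
+    For example, assuming a hypothetical sequence <literal>seq_example</literal>
+    that was created with <literal>INCREMENT BY 10</literal> on the subscriber
+    but with an increment of 1 on the publisher, the mismatch could be resolved
+    on the subscriber as follows:
+<programlisting>
+-- seq_example is a hypothetical name; use the parameters reported as
+-- mismatched for the actual sequence.
+ALTER SEQUENCE seq_example INCREMENT BY 1;
+</programlisting></para>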
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
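+   <para>
+    As a minimal sketch, assuming a published sequence <literal>s1</literal>
+    and a subscription named <literal>sub1</literal> (the same names used in
+    the examples below), the comparison and refresh could look like this:
+<programlisting>
+-- run on both the publisher and the subscriber, then compare the results
+SELECT last_value, is_called FROM s1;
+
+-- if the subscriber is behind, re-synchronize all subscribed sequences
+ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+</programlisting></para>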
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d48cdc76bd3..1ed668caf0f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add missing publication sequences to, or remove sequences
+      from, the subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index 247c5bd2604..10a67288b39 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#316vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#314)
7 attachment(s)
Re: Logical Replication of sequences

On Thu, 28 Aug 2025 at 17:08, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Thanks for updating the patch. Below are my comments.

01.
```
/* Relation is either a sequence or a table */
relkind = get_rel_relkind(subrel->srrelid);
if (relkind != RELKIND_SEQUENCE)
continue;
```

Can you update the comment to "Skip if the relation is not a sequence"?
The current one does not seem related to what we do.

Modified

02.
```
appendStringInfo(&app_name, "%s_%s", MySubscription->name, "sequencesync worker");
```

I'm wondering what a good application_name would be. One idea is to follow the
tablesync worker: "pg_%u_sequence_sync_%u_%llu"; another one is to use the same
name as the bgworker: "logical replication sequencesync worker for subscription %u".
Your name is also valid, but it looks quite different from the other processes.
Do others have any opinions?

Modified

03.
```
test_pub=# SELECT NEXTVAL('s1');
```
I feel there is no need for it to be capitalized.

Modified

04.
```
<link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
<link linkend="sql-altersubscription-params-refresh-publication">
<command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> or
<link linkend="sql-altersubscription-params-refresh-publication-sequences">
<command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
```

IIUC, we can use "A, B, or C" style.

Modified

05.
```
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table synchronization worker or
+        sequence synchronization worker will be respawned.
```

Same as above.

Modified

06.
```
<para>
State codes for sequences:
<literal>i</literal> = initialize,
<literal>r</literal> = ready
</para></entry>
```

Now the attribute can be "d".

Modified
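
For reference, the per-sequence state can be inspected on the subscriber with a
query along these lines. This is only a sketch; it assumes the patched catalog
records sequence entries in pg_subscription_rel alongside tables, using the
state codes discussed above.

```
-- Sketch: sync state of subscribed sequences on the subscriber side.
-- Assumes sequences are tracked in pg_subscription_rel like tables are.
SELECT s.subname,
       sr.srrelid::regclass AS sequence,
       sr.srsubstate,        -- 'i' = initialize, 'r' = ready, plus the new code
       sr.srsublsn
  FROM pg_subscription_rel sr
  JOIN pg_subscription s ON s.oid = sr.srsubid
  JOIN pg_class c        ON c.oid = sr.srrelid
 WHERE c.relkind = 'S';
```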

07.

I feel we should add notes for the subscription options, e.g., the binary
option is a no-op for sequence sync.

Modified

08.
```
/* Get the list of tables and sequences from the publisher. */
if (server_version >= 190000)
{
```

Not sure which is better, but I considered a way to append the string atop v16.
Please see attached. How do you feel?

Modified
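
As an illustration of what that list looks like at the SQL level, the published
tables and sequences of a 19+ publisher can be combined from the two catalog
views. This is only a sketch of the shape of the data, not the query the patch
actually builds, and 'mypub' is an illustrative publication name.

```
-- Sketch: published tables and sequences for one publication on the publisher.
SELECT schemaname, tablename AS relname, 'table' AS relkind
  FROM pg_publication_tables
 WHERE pubname = 'mypub'
UNION ALL
SELECT schemaname, sequencename AS relname, 'sequence' AS relkind
  FROM pg_publication_sequences
 WHERE pubname = 'mypub';
```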

09.
fetch_relation_list() still uses "tables" in some places, which should be "relations".

Modified

10.
```
if (sequencesync_worker)
{
/* Now safe to release the LWLock */
LWLockRelease(LogicalRepWorkerLock);
return;
}
```

Not sure the comment is needed because the lock is acquired just above. Instead,
we could note that at most one sequence sync worker exists per subscription.

Removed this comment. I felt no need to add any additional comments here.

11.
Currently copy_sequences() does not check privileges within the function;
that is done in SetSequence(). Basically it works well, but if the user
does not have enough privileges for one of the sequences in the batch, an
ERROR would be raised and none of the sequences could be synced. How about
detecting it in the loop and skipping synchronization? This would allow
the other sequences to be synced.

Currently, missing sequences and mismatched sequences are printed at the
end of the batch, whereas sequences skipped because they were concurrently
dropped or altered are printed within the loop. Should this case also be
handled within the loop? Based on your input, I will change it in the next
version.
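
As a side note, the problematic case can be spotted from SQL before the sync
runs. The query below is only a sketch: it assumes setval-like privilege
requirements (UPDATE on the sequence) and uses 'sub_owner' as an illustrative
role name.

```
-- Sketch: sequences in the current database that a given role cannot update,
-- i.e. the ones whose synchronization would currently fail the whole batch.
SELECT c.oid::regclass AS sequence
  FROM pg_class c
 WHERE c.relkind = 'S'
   AND NOT has_sequence_privilege('sub_owner', c.oid, 'UPDATE');
```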

12.
```
/*
* This is a clean exit of the sequencesync worker; reset the
* last_seqsync_start_time.
*/
logicalrep_reset_seqsync_start_time();
```

ISTM the function can be used for both the sequence and table sync workers.

It is not required for the table sync worker; for table sync workers this
is maintained in the last_start_times hash table.

13.
```
/* Fetch tables and sequences that are in non-ready state. */
rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
true);
```

IIUC there is no need to check sequences if has_pending_sequences is NULL.

I feel it is required; it is needed to determine whether a sequence
sync worker should be started or not.

Thanks for the comments; the attached patch has the corresponding changes.

Regards,
Vignesh

Attachments:

v20250902-0001-Enhance-pg_get_sequence_data-function.patchtext/x-patch; charset=UTF-8; name=v20250902-0001-Enhance-pg_get_sequence_data-function.patchDownload
From 3b15dc341b513d200c9f0cd14ba5edf328cfb632 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250902 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, When a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250902-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patchtext/x-patch; charset=US-ASCII; name=v20250902-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patchDownload
From 23aee35e29fa5f1f9e3af7cafc063b5ff5e3b319 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250902 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as REFRESH operations is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 82cf65fae73..5c757776afc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1602,8 +1602,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1657,8 +1657,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1682,12 +1682,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1699,8 +1699,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1712,10 +1712,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2312,17 +2312,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2380,13 +2380,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..48104c22c4b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,7 +10983,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

v20250902-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchtext/x-patch; charset=US-ASCII; name=v20250902-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patchDownload
From 2bf66a1f64db9b81729a7af70064865758199019 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250902 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql commands have been enhanced to display which
publications contain the specified sequence (\d command) and whether a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 122 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 792 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..bf823fb1cb2 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1439,6 +1488,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1451,20 +1501,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2018,19 +2075,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bea793456f9..7522efe02e4 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
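
To verify the new catalog flag, the publication.out changes further down
query pg_publication directly; a minimal sketch (publication name taken
from those tests):

    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname = 'regress_pub_forallsequences1';

For a FOR ALL SEQUENCES publication this is expected to show
puballsequences = t (and t/t when ALL TABLES is also specified).
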
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
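
To illustrate how the new parse nodes map onto the syntax (a sketch; the
statements are taken from the publication.out changes below, and the
gram.y changes are not shown in this hunk), each ALL TABLES / ALL
SEQUENCES item in the FOR list presumably becomes a PublicationAllObjSpec
with the corresponding PublicationAllObjType, and may appear only once:

    CREATE PUBLICATION regress_pub_forallsequences1
        FOR ALL SEQUENCES;                  -- sets for_all_sequences
    CREATE PUBLICATION regress_pub_for_allsequences_alltables
        FOR ALL SEQUENCES, ALL TABLES;      -- sets both flags
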
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 53268059142..c7c8b9e1262 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should give a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index deddf0da844..d77bbc973f1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a13e8162890..49af245ed8f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

v20250902-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch; charset=US-ASCII)
From 06ba9ee59f42f157df199b8f5510c07b841a31eb Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:45:14 +0530
Subject: [PATCH v20250902 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will
add newly published sequences that are not yet part of the subscription
and remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
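
For example, a minimal usage sketch (publication, subscription, and
connection names here are illustrative, not taken from the patch; the
FOR ALL SEQUENCES and REFRESH PUBLICATION SEQUENCES syntax is the one
added by this patch series):

    -- on the publisher: publish all sequences
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- on the subscriber: create the subscription as usual
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- on the subscriber: later, re-synchronize all subscribed sequences
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;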

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 329 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 402 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE publications; FOR ALL TABLES and
+ * FOR ALL SEQUENCES publications should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..5a8275d49ba 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +559,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * get only the tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, get only the sequences that have not reached READY state
+ * (i.e. are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5c757776afc..344dfa8e894 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -869,13 +903,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -883,7 +916,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -905,17 +939,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -933,34 +967,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -968,28 +1015,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1011,41 +1059,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1054,10 +1116,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1071,11 +1133,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1087,6 +1151,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences of the given subscription with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1722,6 +1810,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1997,7 +2097,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2393,11 +2493,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2583,8 +2687,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2592,15 +2711,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2608,8 +2729,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2638,7 +2776,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2651,7 +2789,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2667,22 +2805,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 48104c22c4b..a359f3e293e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10988,6 +10988,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..a2ba0cef007 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a7ff6601054..a3f02884404 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0

Attachment: v20250902-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 5a6054d9edcea4d7ae881cdbbfc0501171b6a9ef Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:59:39 +0530
Subject: [PATCH v20250902 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by moving the code shared by sync workers
into a new file, syncutils.c. This refactoring will facilitate the
development of the sequence synchronization worker.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f1ebd63e792..d1493f36e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1230,7 +1230,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1352,7 +1352,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1408,7 +1408,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1474,7 +1474,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1609,7 +1609,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2451,7 +2451,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4114,7 +4114,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5744,7 +5744,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 7522efe02e4..e815e1c73be 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5258,7 +5258,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5316,7 +5316,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 62ea1a00580..cfd0a223648 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -274,9 +276,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 49af245ed8f..a7ff6601054 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250902-0006-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From a4de42077b7b1354f1ed834fc5de294fa08e4db8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 28 Aug 2025 11:37:21 +0530
Subject: [PATCH v20250902 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done; commits the synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Sequences that are no longer published are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      this does not add newly published sequences or remove ones that are
      no longer published (see the usage sketch after this list).
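
A rough usage sketch of the three paths above (the names seq_pub, seq_sub
and the connection string are hypothetical, and the publication is assumed
to already include the sequences via the publisher-side patches of this
series):

    -- 1) Initial synchronization of all published sequences
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- 2) Pick up newly published sequences, drop ones no longer published
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

    -- 3) Re-synchronize the values of all already-subscribed sequences
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    -- Inspect the per-relation sync state on the subscriber ('r' = READY)
    SELECT srrelid::regclass, srsubstate
      FROM pg_subscription_rel
     WHERE srsubid = (SELECT oid FROM pg_subscription
                      WHERE subname = 'seq_sub');

In each case the sequence values themselves are copied by the sequencesync
worker described above; the commands only add or reset the corresponding
pg_subscription_rel entries.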

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |  44 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 654 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 124 +++-
 src/backend/replication/logical/tablesync.c   |  88 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  10 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 +++++++
 src/tools/pgindent/typedefs.list              |   1 +
 26 files changed, 1304 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 5a8275d49ba..e67444b53d7 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 344dfa8e894..bbab49ae9f1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1074,7 +1074,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2074,7 +2074,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -2717,7 +2717,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
-	List	   *tablelist = NIL;
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
 	bool		check_relkind = (server_version >= 190000);
@@ -2729,25 +2729,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2764,7 +2747,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs,  c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2772,6 +2755,15 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, inclusion of sequences in the target is supported */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2823,13 +2815,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		relinfo->relkind = relkind;
 
 		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
-			list_member_rangevar(tablelist, relinfo->rv))
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, relinfo);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2837,7 +2829,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index add2e2e066c..3add0aff35d 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time tracked in the subscription's apply
+ * worker, which records when a sequencesync worker was last started.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1590,7 +1618,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1630,6 +1658,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..0721fed0061
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,654 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
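+/*
+ * List of LogicalRepSequenceInfo entries for sequences that still need to be
+ * synchronized.  It is built by LogicalRepSyncSequences() in
+ * CacheMemoryContext so that it survives the per-batch transaction commits in
+ * copy_sequences() and so that the relcache invalidation callback can flag
+ * stale entries.
+ */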
+static List *sequences_to_copy = NIL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
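+	/*
+	 * Passing InvalidOid as the relid tells launch_sync_worker to start a
+	 * sequencesync worker rather than a tablesync worker.
+	 */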
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * sequence_comparator
+ *
+ * Comparator function for sorting LogicalRepSequenceInfo objects in a list.
+ * It compares sequences first by namespace name and then by sequence name.
+ */
+static int
+sequence_comparator(const ListCell *s1, const ListCell *s2)
+{
+	int			cmp;
+	LogicalRepSequenceInfo *seqinfo1 = (LogicalRepSequenceInfo *) (s1->ptr_value);
+	LogicalRepSequenceInfo *seqinfo2 = (LogicalRepSequenceInfo *) (s2->ptr_value);
+
+	/* Compare by namespace name first */
+	cmp = strcmp(seqinfo1->nspname, seqinfo2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(seqinfo1->seqname, seqinfo2->seqname);
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value. Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = list_length(sequences_to_copy);
+	int			current_index = 0;
+	int			search_pos = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	bool		run_as_owner = MySubscription->runasowner;
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	/* Sort the list of sequences to optimize the search */
+	list_sort(sequences_to_copy, sequence_comparator);
+
+	/*
+	 * We batch synchronize multiple sequences per transaction, because the
+	 * alternative of synchronizing each sequence individually incurs overhead
+	 * of starting and committing transactions repeatedly. On the other hand,
+	 * we want to avoid keeping this batch transaction open for extended
+	 * periods so it is currently limited to 100 sequences per batch.
+	 */
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	while (current_index < total_seqs)
+	{
+#define REMOTE_SEQ_COL_COUNT 12
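+		/*
+		 * Expected result columns, in order: nspname, seqname, last_value,
+		 * is_called, log_cnt, page_lsn, seqtypid, seqstart, seqincrement,
+		 * seqmin, seqmax, seqcycle.
+		 */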
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		int			batch_size = Min(MAX_SEQUENCES_SYNC_PER_BATCH, total_seqs - current_index);
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Prepare the string of current batch sequences to fetch from the
+		 * publisher.
+		 */
+		for (int i = 0; i < batch_size; i++)
+		{
+			LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy,
+																   current_index + i));
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(%s, %s)",
+							 quote_literal_cstr(seqinfo->nspname),
+							 quote_literal_cstr(seqinfo->seqname));
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		destroyStringInfo(seqstr);
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int			col = 0;
+			bool		isnull;
+			char	   *nspname;
+			char	   *seqname;
+			int64		last_value;
+			bool		is_called;
+			int64		log_cnt;
+			XLogRecPtr	page_lsn;
+			Oid			seqtypid;
+			int64		seqstart;
+			int64		seqmin;
+			int64		seqmax;
+			int64		seqincrement;
+			bool		seqcycle;
+			HeapTuple	tup;
+			Relation	sequence_rel;
+			Form_pg_sequence seqform;
+			LogicalRepSequenceInfo *seqinfo = NULL;
+
+			CHECK_FOR_INTERRUPTS();
+
+			nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+			Assert(!isnull);
+
+			/* Sanity check */
+			Assert(col == REMOTE_SEQ_COL_COUNT);
+
+			/*
+			 * Locate the local entry for the sequence returned by the
+			 * publisher.  Both sequences_to_copy and the remote result are
+			 * sorted by (nspname, seqname), so search_pos only ever moves
+			 * forward.
+			 */
+			while (search_pos < total_seqs)
+			{
+				LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));
+
+				if (!strcmp(candidate_seq->nspname, nspname) &&
+					!strcmp(candidate_seq->seqname, seqname))
+				{
+					seqinfo = candidate_seq;
+					search_pos++;
+					break;
+				}
+
+				search_pos++;
+			}
+
+			Assert(seqinfo);
+
+			seqinfo->remote_seq_fetched = true;
+
+			/* Get the local sequence */
+			sequence_rel = try_table_open(seqinfo->localrelid,
+										  RowExclusiveLock);
+			tup = SearchSysCache1(SEQRELID,
+								  ObjectIdGetDatum(seqinfo->localrelid));
+			if (!sequence_rel || !HeapTupleIsValid(tup))
+			{
+				elog(LOG, "skipping synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+					 nspname, seqname);
+
+				batch_skipped_count++;
+				continue;
+			}
+
+			/* Skip the invalidated entry */
+			if (!seqinfo->entry_valid)
+			{
+				ReleaseSysCache(tup);
+				table_close(sequence_rel, RowExclusiveLock);
+				batch_skipped_count++;
+
+				ereport(LOG,
+						errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							   nspname, seqname));
+				continue;
+			}
+
+			seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+			/* Update the sequence only if the parameters are identical */
+			if (seqform->seqtypid == seqtypid &&
+				seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+				seqform->seqcycle == seqcycle &&
+				seqform->seqstart == seqstart &&
+				seqform->seqincrement == seqincrement)
+			{
+				UserContext ucxt;
+
+				/*
+				 * Make sure the sequence update runs as the sequence owner,
+				 * unless the user has opted out of that behaviour.
+				 */
+				if (!run_as_owner)
+					SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+				SetSequence(seqinfo->localrelid, last_value, log_cnt,
+							is_called);
+
+				if (!run_as_owner)
+					RestoreUserContext(&ucxt);
+
+				UpdateSubscriptionRelState(subid, seqinfo->localrelid,
+										   SUBREL_STATE_READY, page_lsn, false);
+				ereport(DEBUG1,
+						errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+										MySubscription->name,
+										nspname,
+										seqname));
+
+				batch_succeeded_count++;
+			}
+			else
+			{
+				if (mismatched_seqs->len)
+					appendStringInfoString(mismatched_seqs, ", ");
+
+				appendStringInfo(mismatched_seqs, "\"%s.%s\"",
+								 nspname, seqname);
+				batch_mismatched_count++;
+			}
+
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+
+		walrcv_clear_result(res);
+
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count)));
+
+		/*
+		 * Sequence synchronization for this batch was incomplete because some
+		 * sequences are missing on the publisher. Identify the missing
+		 * sequences.
+		 */
+		if ((batch_succeeded_count + batch_skipped_count + batch_mismatched_count) < batch_size)
+		{
+			for (int i = 0; i < batch_size; i++)
+			{
+				LogicalRepSequenceInfo *seqinfo = lfirst(list_nth_cell(sequences_to_copy, current_index + i));
+
+				if (!seqinfo->remote_seq_fetched)
+				{
+					if (missing_seqs->len)
+						appendStringInfoString(missing_seqs, ", ");
+
+					appendStringInfo(missing_seqs, "\"%s.%s\"",
+									 seqinfo->nspname, seqinfo->seqname);
+
+					RemoveSubscriptionRel(MySubscription->oid, seqinfo->localrelid);
+					ereport(DEBUG1,
+							errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+											seqinfo->nspname, seqinfo->seqname, MySubscription->name));
+				}
+			}
+		}
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Advance current_index by the full batch size rather than by the
+		 * number of fetched rows, since some sequences in the batch may be
+		 * missing on the publisher.
+		 */
+		current_index += batch_size;
+	}
+
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences no longer present on the publisher have been removed from sequence synchronization: (%s)",
+								missing_seqs->data));
+
+	/*
+	 * Raise an error if the local and remote sequence parameters do not
+	 * match.
+	 */
+	if (mismatched_seqs->len)
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+				errdetail("Mismatched sequence(s) on subscriber: (%s).", mismatched_seqs->data),
+				errhint("Alter or re-create the local sequences so that their parameters match the corresponding publisher sequences."));
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	/* Quick exit if no sequence is listed yet */
+	if (list_length(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+		{
+			if (seq_info->localrelid == reloid)
+			{
+				seq_info->entry_valid = false;
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all cache entries */
+		foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+			seq_info->entry_valid = false;
+	}
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
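+
+	/*
+	 * Collect all non-READY pg_subscription_rel entries for this
+	 * subscription; only the sequences among them are added to
+	 * sequences_to_copy.
+	 */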
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSequenceInfo *seq_info;
+		char	   *nspname;
+		char	   *seqname;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		seqname = RelationGetRelationName(sequence_rel);
+		nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_info = (LogicalRepSequenceInfo *) palloc(sizeof(LogicalRepSequenceInfo));
+		seq_info->seqname = pstrdup(seqname);
+		seq_info->nspname = pstrdup(nspname);
+		seq_info->localrelid = subrel->srrelid;
+		seq_info->remote_seq_fetched = false;
+		seq_info->seqowner = sequence_rel->rd_rel->relowner;
+		seq_info->entry_valid = true;
+		sequences_to_copy = lappend(sequences_to_copy, seq_info);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+	foreach_ptr(LogicalRepSequenceInfo, seq_info, sequences_to_copy)
+	{
+		pfree(seq_info->seqname);
+		pfree(seq_info->nspname);
+
+		sequences_to_copy = foreach_delete_current(sequences_to_copy, seq_info);
+	}
+
+	list_free(sequences_to_copy);
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..2e7e710cf55 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,15 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
 	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
+	/*
+	 * On a clean exit of the sequencesync worker, reset the
+	 * last_seqsync_start_time so that the apply worker does not delay
+	 * starting the next sequencesync worker.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+
 	/* Stop gracefully */
 	proc_exit(0);
 }
@@ -89,7 +102,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +151,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +164,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +191,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if the subscription has one or more tables, else false.  If
+ * has_pending_sequences is non-NULL, *has_pending_sequences is set to true
+ * when any sequence is not yet in READY state.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation state info is
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +219,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +228,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +241,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +270,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index a2ba0cef007..2a820182a1a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d1493f36e04..6a946445e3b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -689,6 +689,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1229,7 +1234,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1351,7 +1359,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1407,7 +1418,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1473,7 +1487,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1608,7 +1625,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2450,7 +2470,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3271,7 +3294,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 
 		SpinLockAcquire(&leader->relmutex);
 		oldestxmin = leader->oldest_nonremovable_xid;
@@ -4113,7 +4136,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5443,7 +5469,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5563,8 +5590,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5675,6 +5702,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5694,14 +5725,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5772,6 +5805,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5784,9 +5821,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index f137129209f..692414f959d 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3420,7 +3420,7 @@ struct config_int ConfigureNamesInt[] =
 
 	{
 		{"max_sync_workers_per_subscription", PGC_SIGHUP, REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..6101bdedda4 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,16 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;		/* sequence name */
+	char	   *nspname;		/* schema of the sequence */
+	Oid			localrelid;		/* OID of the local sequence */
+	bool		remote_seq_fetched; /* sequence data received from publisher? */
+	Oid			seqowner;		/* owner of the local sequence */
+	bool		entry_valid;	/* false if invalidated by a concurrent change */
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index cfd0a223648..bd668d308c2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -278,11 +286,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -347,15 +356,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# worker will also fail because the sequence increment values differ between
+	# the publisher and the subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that the sequencesync worker doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..fbcd0e9927d
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched for
+# the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Set up the structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Set up the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Set up the logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should re-synchronize
+# all sequences already known to the subscription, but should not sync newly
+# published sequences that are not yet part of the subscription.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when a
+# sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that an error is logged for the mismatched sequence definition.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the removal of the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences no longer present on publisher have been removed from sequence resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3f02884404..d0c4818db4b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250902-0007-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 56e08d1925a7f8de817ee58949e016954f25d52b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250902 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 249 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 +++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 68 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a4b3e55ba5..6054a81b923 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..06d29966e23 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>, and then,
+   on the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber can become out of sync over time as the
+    sequences continue to be updated on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and to
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to, or remove dropped sequences
+      from, the subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle errors caused by sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that publishes all sequences in the database
+      for synchronization, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication.  These
+      parameters apply only to published tables, not to sequences.  The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal>, and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index fc314437311..51d0b389be5 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle errors caused by sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
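
For reference, a minimal usage sketch of the pg_publication_sequences view
documented above, assuming the patch series is applied; the publication name
pub_all_seq is hypothetical and not created by the patch:

```
-- Hypothetical example: publish all sequences, then list them through the
-- new view. The columns (pubname, schemaname, sequencename) match the
-- documentation added above.
CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'pub_all_seq'
ORDER BY schemaname, sequencename;
```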

#317Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#316)
RE: Logical Replication of sequences

Dear Vignesh,

Thanks for updating the patch. A few comments:
01.
```
/* Find the leader apply worker and signal it. */
logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
```

The sequencesync worker does not need to send a signal to the apply worker.
Should we skip it in this case?
Per my understanding, the signal is being used to set the status to STATE_READY.

02.
```
if (worker)
worker->last_seqsync_start_time = 0;

LWLockRelease(LogicalRepWorkerLock);
```

I feel we can release the LWLock first and then update last_seqsync_start_time.

03.
The sequencesync worker cannot update its GUC parameters because ProcessConfigFile()
is not called. How about checking for the signal at the end of the batch loop?

04.
```
while (search_pos < total_seqs)
{
LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));

if (!strcmp(candidate_seq->nspname, nspname) &&
!strcmp(candidate_seq->seqname, seqname))
{
seqinfo = candidate_seq;
search_pos++;
break;
}

search_pos++;
}
```

It looks like once an entry in sequences_to_copy is skipped, it is never
revisited. I feel this method is a bit dangerous, because the ordering of the
list may differ from the ordering of the tuples returned by the publisher; the
nodes may use different collations.

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#318vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#317)
7 attachment(s)
Re: Logical Replication of sequences

On Wed, 3 Sept 2025 at 13:04, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Thanks for updating the patch. A few comments:
01.
```
/* Find the leader apply worker and signal it. */
logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
```

The sequencesync worker does not need to send a signal to the apply worker.
Should we skip it in this case?
Per my understanding, the signal is being used to set the status to STATE_READY.

Modified

02.
```
if (worker)
worker->last_seqsync_start_time = 0;

LWLockRelease(LogicalRepWorkerLock);
```

I feel we can release the LWLock first and then update last_seqsync_start_time.

I felt it should be done within the lock so that
ProcessSyncingSequencesForApply waits until last_seqsync_start_time
is also set.

03.
The sequencesync worker cannot update its GUC parameters because ProcessConfigFile()
is not called. How about checking for the signal at the end of the batch loop?

Modified

04.
```
while (search_pos < total_seqs)
{
LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));

if (!strcmp(candidate_seq->nspname, nspname) &&
!strcmp(candidate_seq->seqname, seqname))
{
seqinfo = candidate_seq;
search_pos++;
break;
}

search_pos++;
}
```

It looks like once an entry in sequences_to_copy is skipped, it is never
revisited. I feel this method is a bit dangerous, because the ordering of the
list may differ from the ordering of the tuples returned by the publisher; the
nodes may use different collations.

Modified

The attached patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20250904-0001-Enhance-pg_get_sequence_data-function.patch
From 0074e7169e9e1d712812944f299d089bc16b2c01 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250904 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0
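
As a rough illustration of the out-of-sync detection described in the commit
message above (a sketch under assumptions, not taken from the patch; the
sequence public.myseq and its subscription are hypothetical):

```
-- On the publisher: current state of the sequence, including the page LSN
-- column added by this patch.
SELECT last_value, is_called, log_cnt, page_lsn
FROM pg_get_sequence_data('public.myseq');

-- On the subscriber: the LSN recorded when the sequence was last
-- synchronized into pg_subscription_rel.
SELECT srsublsn
FROM pg_subscription_rel
WHERE srrelid = 'public.myseq'::regclass;

-- If the publisher's page_lsn is newer than the subscriber's srsublsn, the
-- sequence has advanced since the last sync and may need re-synchronization.
```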

v20250904-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch
From 6a5f48527cb9479cf4f399e403f93509d140e4e5 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250904 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql \d command now displays which publications
include the specified sequence, and the \dRp command shows whether a
publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 122 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 792 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..bf823fb1cb2 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1439,6 +1488,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1451,20 +1501,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2018,19 +2075,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index db43034b9db..740cc910870 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10702,7 +10708,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10722,13 +10733,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10840,6 +10852,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19615,6 +19649,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bea793456f9..7522efe02e4 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 53268059142..c7c8b9e1262 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+-- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describing the sequence lists both publications it belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+-- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should emit a notice that the parameters are ignored for sequences
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index deddf0da844..d77bbc973f1 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a13e8162890..49af245ed8f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250904-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From 30c2f5cd0bd222c34bdeeec982a9ffd9c24a781d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:45:14 +0530
Subject: [PATCH v20250904 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 329 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 402 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE publications; FOR ALL TABLES/SEQUENCES
+ * publications should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..5a8275d49ba 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +559,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * get only tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, get only sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5c757776afc..344dfa8e894 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The relations fetched from
+			 * the publisher include both tables and sequences.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -869,13 +903,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -883,7 +916,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -905,17 +939,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -933,34 +967,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -968,28 +1015,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1011,41 +1059,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1054,10 +1116,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1071,11 +1133,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1087,6 +1151,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1722,6 +1810,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1997,7 +2097,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2393,11 +2493,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2583,8 +2687,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2592,15 +2711,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2608,8 +2729,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2638,7 +2776,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2651,7 +2789,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2667,22 +2805,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 48104c22c4b..a359f3e293e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10988,6 +10988,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..a2ba0cef007 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a7ff6601054..a3f02884404 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0

Attachment: v20250904-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From c88e1c952d8234cfe599e20c202fb60c30172ad4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:59:39 +0530
Subject: [PATCH v20250904 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f1ebd63e792..d1493f36e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1230,7 +1230,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1352,7 +1352,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1408,7 +1408,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1474,7 +1474,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1609,7 +1609,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2451,7 +2451,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4114,7 +4114,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5744,7 +5744,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 7522efe02e4..e815e1c73be 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5258,7 +5258,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5316,7 +5316,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 62ea1a00580..cfd0a223648 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -274,9 +276,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 49af245ed8f..a7ff6601054 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250904-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From af7b39b6b0d57d3e8898e35478949b3b7d519959 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250904 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)
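
For reference, a minimal SQL sketch of the command whose messages this patch
renames (the subscription name is a placeholder; the errors shown are the ones
updated by the regression test changes below):

    -- Re-fetch the list of published relations from the publisher.
    ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;

    -- Still disallowed inside a transaction block:
    BEGIN;
    ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
    -- ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
    END;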

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 82cf65fae73..5c757776afc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1602,8 +1602,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1657,8 +1657,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1682,12 +1682,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1699,8 +1699,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1712,10 +1712,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2312,17 +2312,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2380,13 +2380,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 740cc910870..48104c22c4b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10983,7 +10983,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250904-0006-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From b52b560ccb9da1052589ae2efa57068565d440d4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 28 Aug 2025 11:37:21 +0530
Subject: [PATCH v20250904 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, via pg_sequence_state(), the remote values of sequences in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      sequences are not added or removed in this case (see the usage
      sketch below).

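A minimal SQL sketch of the three commands described above (the subscription,
publication, and connection string are placeholders):

    -- 1) Initial creation: published sequences are added in INIT state and
    --    synchronized by the sequencesync worker.
    CREATE SUBSCRIPTION mysub
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION mypub;

    -- 2) Pick up newly published sequences (dropped ones are removed).
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- 3) Re-synchronize all known sequences (new command in this patch);
    --    sequences are not added or removed here.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;
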
Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |  44 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 751 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   |  88 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_tables.c           |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1411 insertions(+), 192 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 5a8275d49ba..e67444b53d7 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 344dfa8e894..bbab49ae9f1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1074,7 +1074,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2074,7 +2074,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -2717,7 +2717,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
-	List	   *tablelist = NIL;
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
 	bool		check_relkind = (server_version >= 190000);
@@ -2729,25 +2729,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2764,7 +2747,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs,  c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2772,6 +2755,15 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19 onwards, published sequences are also fetched */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2823,13 +2815,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		relinfo->relkind = relkind;
 
 		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
-			list_member_rangevar(tablelist, relinfo->rv))
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, relinfo);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2837,7 +2829,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index add2e2e066c..3add0aff35d 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time that the subscription's apply worker
+ * tracks for the sequencesync worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1590,7 +1618,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1630,6 +1658,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..0028bd52077
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,751 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections to the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences for which the privileges are insufficient, as well
+ * as sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		/* Release whatever we did manage to acquire */
+		if (HeapTupleIsValid(tup))
+			ReleaseSysCache(tup);
+		if (sequence_rel)
+			table_close(sequence_rel, RowExclusiveLock);
+
+		(*skipped_count)++;
+		elog(LOG, "skipping synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
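+		/* Apply the publisher's state to the local sequence. */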
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy the current state of the published sequences from the publisher, in
+ * batches, and apply it to the corresponding local sequences.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/*
+		 * Collect a batch of sequences.  The hash scan is (re)started for
+		 * each batch, since the transaction is committed after every batch;
+		 * entries that were already queried are skipped via
+		 * remote_seq_queried.  Terminating the scan before issuing the remote
+		 * query also ensures no scan is active while entries are removed
+		 * below or when the transaction commits.
+		 */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(%s, %s)",
+							 quote_literal_cstr(entry->nspname),
+							 quote_literal_cstr(entry->seqname));
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH)
+			{
+				hash_seq_term(&status);
+				break;
+			}
+		}
+
+		if (batch_size == 0)
+		{
+			CommitTransactionCommand();
+			break;
+		}
+
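+		/*
+		 * Build the remote query.  For a batch containing, for example,
+		 * public.s1 and public.s2, the VALUES list expands to
+		 * ('public', 's1'), ('public', 's2'), and the query returns one row
+		 * per listed sequence that still exists on the publisher.
+		 */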
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d: %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Advance current_index by the number of sequences queried in this
+		 * batch rather than by the number of rows fetched, since some
+		 * sequences may be missing on the publisher.  Successfully processed
+		 * entries have already been removed from the hash table via
+		 * HASH_REMOVE above.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (OidIsValid(reloid))
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+
+	/* string_hash() hashes at most keysize - 1 bytes, hence the "+ 1" */
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname) + 1);
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname) + 1);
+
+	/* Combine the two hashes by XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
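+	/*
+	 * The hash table is keyed by (nspname, seqname) string pairs, hence the
+	 * custom hash and match functions.
+	 */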
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequence sync worker sequences",
+									256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key,
+								HASH_ENTER, &found);
+		Assert(seq_entry != NULL);
+
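+		/*
+		 * The memset below also wipes the key pointers that dynahash copied
+		 * into the entry, so re-point them immediately at long-lived copies;
+		 * the hash and match functions only look at the string contents.
+		 */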
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+	hash_seq_init(&hash_seq, sequences_to_copy);
+	while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		pfree(seq_entry->seqname);
+		pfree(seq_entry->nspname);
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid: InvalidOid to launch a sequencesync worker, or the OID of the table
+ * for which to launch a tablesync worker.
+ * last_start_time: Pointer to the worker's last start time; it is updated even
+ * if the launch fails, to throttle retries.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
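+	/* Callers not interested in sequences may pass NULL. */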
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index a2ba0cef007..2a820182a1a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d1493f36e04..6a946445e3b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -689,6 +689,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1229,7 +1234,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1351,7 +1359,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1407,7 +1418,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1473,7 +1487,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1608,7 +1625,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2450,7 +2470,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3271,7 +3294,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 
 		SpinLockAcquire(&leader->relmutex);
 		oldestxmin = leader->oldest_nonremovable_xid;
@@ -4113,7 +4136,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5443,7 +5469,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5563,8 +5590,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5675,6 +5702,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5694,14 +5725,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5772,6 +5805,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5784,9 +5821,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index f137129209f..692414f959d 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3420,7 +3420,7 @@ struct config_int ConfigureNamesInt[] =
 
 	{
 		{"max_sync_workers_per_subscription", PGC_SIGHUP, REPLICATION_SUBSCRIBERS,
-			gettext_noop("Maximum number of table synchronization workers per subscription."),
+			gettext_noop("Maximum number of workers per subscription for synchronizing tables and sequences."),
 			NULL
 		},
 		&max_sync_workers_per_subscription,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index cfd0a223648..bd668d308c2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -278,11 +286,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -347,15 +356,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber to match the publisher so
+	# that sequence synchronization can succeed.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise extra values will be fetched for
+# the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should re-synchronize
+# all sequences already known to the subscription, but sequences newly
+# published on the publisher should not be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when the
+# sequence definitions on the publisher and the subscriber do not match.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the sequence definition mismatch error is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message about the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3f02884404..505b3b6723f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250904-0007-Documentation-for-sequence-synchronization.patch (text/x-patch)
From bc07a389fc8a9a3b88c679c35187f0a7ed0339b8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250904 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 249 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 +++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 68 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a4b3e55ba5..6054a81b923 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..06d29966e23 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber may become out of sync over time as the
+    corresponding sequences are updated on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any errors caused by sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to, or remove missing sequences
+      from, the subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any errors caused by sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index fc314437311..51d0b389be5 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any errors caused by sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#319Masahiko Sawada
sawada.mshk@gmail.com
In reply to: vignesh C (#318)
Re: Logical Replication of sequences

On Thu, Sep 4, 2025 at 9:51 AM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 3 Sept 2025 at 13:04, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Thanks for updating the patch. Few comments:
01.
```
/* Find the leader apply worker and signal it. */
logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
```

Sequencesync worker does not need to send a signal to the apply worker.
Should we skip it in this case?
Per my understanding, the signal is being used to set the status to STATE_READY.

Modified
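
(For illustration, the kind of guard being discussed looks roughly like the
sketch below; am_sequencesync_worker() is just a placeholder for however the
patch actually identifies the new worker type, so the exact code may differ.)

```c
/*
 * Illustrative sketch only: a tablesync worker signals the leader apply
 * worker so it can advance the relation state to STATE_READY; a
 * sequencesync worker has no such state transition and can skip it.
 */
if (!am_sequencesync_worker())
{
	/* Find the leader apply worker and signal it. */
	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
}
```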

02.
```
if (worker)
worker->last_seqsync_start_time = 0;

LWLockRelease(LogicalRepWorkerLock);
```

I feel we can release the LWLock first and then update last_seqsync_start_time.

I felt it should be done within the lock so that
ProcessSyncingSequencesForApply waits until the last_seqsync_start_time
is also set.

03.
The sequencesync worker cannot update its GUC parameters because ProcessConfigFile()
is not called. How about checking the signal at the end of the batch loop?

Modified
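
(For illustration, the standard backend pattern for picking up pending GUC
changes at a safe point is a check like the one below; where exactly it lands
in the sequencesync batch loop is up to the patch.)

```c
/*
 * Re-read the configuration file if a SIGHUP arrived, so the worker picks
 * up changed GUCs.  Requires postmaster/interrupt.h and utils/guc.h.
 */
if (ConfigReloadPending)
{
	ConfigReloadPending = false;
	ProcessConfigFile(PGC_SIGHUP);
}
```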

04.
```
while (search_pos < total_seqs)
{
LogicalRepSequenceInfo *candidate_seq = lfirst(list_nth_cell(sequences_to_copy, search_pos));

if (!strcmp(candidate_seq->nspname, nspname) &&
!strcmp(candidate_seq->seqname, seqname))
{
seqinfo = candidate_seq;
search_pos++;
break;
}

search_pos++;
}
```

It looks like if an entry in sequences_to_copy is skipped, it won't be
referred to anymore. I feel this method is a bit dangerous, because the ordering
of the list may differ from that of the tuples returned from the publisher. The
nodes may use different collations.

Modified
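
(For illustration only: one order-independent way to do the matching is to key
the local entries by schema and sequence name instead of relying on list
position, e.g. with a dynahash.  The struct layouts and function below are a
hypothetical sketch, not the code in the attached patch; it assumes
utils/hsearch.h and nodes/pg_list.h.)

```c
/* Hypothetical lookup key/entry; the patch's own structs may differ. */
typedef struct SeqLookupKey
{
	char		nspname[NAMEDATALEN];
	char		seqname[NAMEDATALEN];
} SeqLookupKey;

typedef struct SeqLookupEntry
{
	SeqLookupKey key;			/* hash key, must be first */
	LogicalRepSequenceInfo *seqinfo;	/* entry from sequences_to_copy */
} SeqLookupEntry;

static HTAB *
build_seq_lookup(List *sequences_to_copy)
{
	HASHCTL		ctl = {0};
	HTAB	   *htab;
	ListCell   *lc;

	ctl.keysize = sizeof(SeqLookupKey);
	ctl.entrysize = sizeof(SeqLookupEntry);
	htab = hash_create("sequences to copy", list_length(sequences_to_copy),
					   &ctl, HASH_ELEM | HASH_BLOBS);

	foreach(lc, sequences_to_copy)
	{
		LogicalRepSequenceInfo *seqinfo = lfirst(lc);
		SeqLookupKey key = {0};
		SeqLookupEntry *entry;
		bool		found;

		strlcpy(key.nspname, seqinfo->nspname, NAMEDATALEN);
		strlcpy(key.seqname, seqinfo->seqname, NAMEDATALEN);
		entry = hash_search(htab, &key, HASH_ENTER, &found);
		entry->seqinfo = seqinfo;
	}

	return htab;
}
```

Each tuple returned by the publisher can then be looked up with
hash_search(..., HASH_FIND, ...), independent of the ordering or collation the
publisher used.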

The attached patch has the changes for the same.

Please rebase the patches as they conflict with current HEAD (due to
commit 6359989654).

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#320Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#305)
Re: Logical Replication of sequences

On Wed, Aug 20, 2025 at 4:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Aug 18, 2025 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the updated version has the changes for the same.

I wanted to first discuss a few design points. The patch implements
"ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES" such that it
copies the existing sequences values and also adds/removes any missing
sequences. For the second part (add/remove sequences), we already have
a separate command "ALTER SUBSCRIPTION ... REFRESH PUBLICATION". So, I
feel the new command should only copy the sequence values, as that
will keep the interface easy to define and understand. Additionally,
it will help to simplify the code in the patch, especially in the
function AlterSubscription_refresh.

While I agree that the new command just copies the sequence values,
I'm not sure the command should be implemented as an extension of
ALTER SUBSCRIPTION ... REFRESH PUBLICATION command. Probably what the
new command does is quite different from what REFRESH PUBLICATION
command does?

We previously discussed *not* to launch an apply worker if the
corresponding publication(s) only publish sequences. See [1]. We
should consider it again to see if that is a good idea. It will have
some drawbacks as compared to the current approach of doing sync via
sync worker. The command could take time for a large number of
sequences, and on failure, a retry won't happen automatically, as it would with
background workers. Additionally, when the connect option is false for
a subscription during creation, the user needs to later call REFRESH
to sync the sequences after enabling the subscription. OTOH, doing the
sync during the command will bring more predictability and simplify
the patch. What do others think?

It seems okay to me that we launch an apply worker for a subscription
corresponding to sequence-only publications. I think the situation
seems somewhat similar to the case where we launch an apply worker
even for a subscription corresponding to empty publications. It would
be quite a rare case in practice where publications have only
sequences. I guess that it would rather simplify the patch if we can
cut the part of doing the sync during the command (i.e., not
distinguish between table-and-sequence publications and sequence-only
publications), no?

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#321vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#319)
7 attachment(s)
Re: Logical Replication of sequences

On Fri, 5 Sept 2025 at 03:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Please rebase the patches as they conflict with current HEAD (due to
commit 6359989654).

Attached a rebased version of the patches.

Regards,
Vignesh

Attachments:

v20250905-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch)
From c34be1acab29f38845fd5d3d0c065e6c6b43aa0c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250905 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last value set in the sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0
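
To make it easier to try the 0001 patch above, here is a minimal SQL
sketch of the extended function, assuming that patch is applied
(demo_seq is just an example sequence; the query shape mirrors the
regression test in the patch):

    -- Minimal sketch, assuming the 0001 patch above is applied.
    CREATE SEQUENCE demo_seq;
    SELECT nextval('demo_seq');

    -- The two new output columns: log_cnt (fetches remaining before the
    -- next WAL record must be written) and page_lsn (LSN of the last WAL
    -- record that touched the sequence page).
    SELECT last_value, is_called, log_cnt,
           page_lsn <= pg_current_wal_lsn() AS page_lsn_not_in_future
    FROM pg_get_sequence_data('demo_seq');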

Attachment: v20250905-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 520f75b95b8aacb6b8a4fcce9f15ef41243c6978 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:59:39 +0530
Subject: [PATCH v20250905 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganize the tablesync code by moving common synchronization code into
a new syncutils.c file. This refactoring will facilitate the development
of the sequence synchronization worker code.

This commit separates code reorganization from functional changes, making
it clear to reviewers that only existing code has been moved. The changes
in this patch can be squashed into subsequent patches at commit time.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f1ebd63e792..d1493f36e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1230,7 +1230,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1352,7 +1352,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1408,7 +1408,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1474,7 +1474,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1609,7 +1609,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2451,7 +2451,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4114,7 +4114,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5744,7 +5744,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 7522efe02e4..e815e1c73be 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5258,7 +5258,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5316,7 +5316,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 62ea1a00580..cfd0a223648 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -274,9 +276,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 49af245ed8f..a7ff6601054 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0
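
The next attachment (0002) adds the ALL SEQUENCES publication syntax. As
a quick orientation, a hedged sketch of the DDL it enables (publication
names here are arbitrary examples; the combined form mirrors the pg_dump
tests included in that patch):

    -- Sequence-only publication; the WITH clause is not supported here.
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- ALL SEQUENCES combined with ALL TABLES; the WITH clause applies to
    -- the table part only.
    CREATE PUBLICATION mixed_pub FOR ALL TABLES, ALL SEQUENCES
        WITH (publish = '');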

Attachment: v20250905-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 96234188b0748f6c0fcef01a8af8d25cfe60fd9c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250905 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now lists the publications that include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 122 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 792 insertions(+), 382 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..bf823fb1cb2 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication. If the publication also includes tables, only issue
+		 * a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1439,6 +1488,7 @@ static void
 CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 					  List *tables, List *schemaidlist)
 {
+	StringInfo	pub_type;
 	Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
 
 	if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
@@ -1451,20 +1501,27 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Tables or sequences cannot be added to or dropped from %s publications.", pub_type->data)));
+
+	destroyStringInfo(pub_type);
 }
 
 /*
@@ -2018,19 +2075,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9fd48acb1f8..03c0913bf72 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10703,7 +10709,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10723,13 +10734,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10841,6 +10853,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19616,6 +19650,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bea793456f9..7522efe02e4 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
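
For reference, the reworked \d on a sequence now fetches the raw values and builds the output client-side, which is what makes the "Publications:" footer possible. Assuming a sequence public.regress_pub_seq0 (the name used in the regression tests below), the query psql sends is roughly:

    SELECT 'bigint',
           start_value,
           min_value,
           max_value,
           increment_by,
           CASE WHEN is_cycled THEN 'yes' ELSE 'no' END,
           cache_value
    FROM public.regress_pub_seq0;

with the yes/no labels filled in client-side.
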
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
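
The completions above correspond to the new syntax; for example (as exercised in the regression tests further down):

    CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
    CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
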
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass
+	 * all sequences in the database (except for unlogged and temporary ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
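
With the new puballsequences column, a FOR ALL SEQUENCES publication can be identified straight from the catalog; the tests below use exactly this kind of query:

    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname = 'regress_pub_forallsequences1';
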
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Object types supported by FOR ALL ... publications
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
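
Each FOR ALL item is parsed into a PublicationAllObjSpec, and duplicate entries are rejected, e.g. (from publication.out below):

    CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
    ERROR:  invalid publication object list
    DETAIL:  ALL TABLES can be specified only once.
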
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..44c582bdc35 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a13e8162890..49af245ed8f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250905-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From acf88bcb964fcaa45a52443f7377a1fae92f9482 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250905 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates references to ALTER SUBSCRIPTION REFRESH (the
ALTER_SUBSCRIPTION_REFRESH enum value, related comments, and
user-facing messages) to ALTER SUBSCRIPTION REFRESH PUBLICATION for
improved clarity and extensibility, especially as the REFRESH
operation is being extended to sequences.
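
For quick reference, a minimal psql sketch of the reworded messages,
lifted from the regression output updated further down in this patch
(the subscription names are the ones used in the tests):

    -- regress_testsub3 is disabled
    ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
    ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions

    -- not allowed inside a transaction block
    BEGIN;
    ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
    ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
    END;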
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 82cf65fae73..5c757776afc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1602,8 +1602,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1657,8 +1657,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1682,12 +1682,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1699,8 +1699,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1712,10 +1712,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2312,17 +2312,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2380,13 +2380,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 03c0913bf72..6a5b226c906 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10984,7 +10984,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250905-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From f18c90bb66d65443d5032f05ecb9385740afd51f Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:45:14 +0530
Subject: [PATCH v20250905 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
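
A rough usage sketch of the behavior described above (publication,
subscription, and connection names are illustrative, not taken from the
patch):

    -- publisher
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- subscriber
    CREATE SUBSCRIPTION sub_seqs
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub_seqs;

    -- later: mark the subscribed sequences INIT again to re-synchronize them
    ALTER SUBSCRIPTION sub_seqs REFRESH PUBLICATION SEQUENCES;

    -- SET/ADD/DROP PUBLICATION and REFRESH PUBLICATION also refresh the
    -- sequence list (add newly published sequences, drop removed ones)
    ALTER SUBSCRIPTION sub_seqs SET PUBLICATION pub_seqs;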
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 329 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 402 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..5a8275d49ba 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,22 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +559,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * get only the tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, get only the sequences that have not reached READY state
+ * (i.e. are still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5c757776afc..344dfa8e894 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -869,13 +903,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -883,7 +916,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -905,17 +939,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -933,34 +967,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -968,28 +1015,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1011,41 +1059,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1054,10 +1116,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1071,11 +1133,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1087,6 +1151,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences of the subscription with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1722,6 +1810,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1997,7 +2097,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2393,11 +2493,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2583,8 +2687,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2592,15 +2711,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2608,8 +2729,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2638,7 +2776,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2651,7 +2789,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2667,22 +2805,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6a5b226c906..6a06044d5fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10989,6 +10989,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..a2ba0cef007 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a7ff6601054..a3f02884404 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0
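
For reference, the user-visible pieces added by the patch above can be
exercised roughly as follows (a sketch only; 'pub1' and 'sub1' are
illustrative publication and subscription names):

    -- Publisher: list the sequences a publication exports, via the new
    -- pg_publication_sequences view.
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'pub1';

    -- Subscriber: the new command added to gram.y above.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;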

Attachment: v20250905-0006-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 23fdcb3f1fe436f3630e4c43c41885d1736d5971 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 5 Sep 2025 08:34:58 +0530
Subject: [PATCH v20250905 6/7] New worker for sequence synchronization during 
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state (using pg_sequence_state()).
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing the synchronized
       sequences in batches of 100 (the per-batch publisher query is
       sketched after this list).

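To make the batching concrete, the per-batch query that the sequencesync
worker builds in copy_sequences() and sends to the publisher looks roughly
like this (the VALUES pairs below are illustrative; the real query carries up
to 100 schema/sequence name pairs per batch):

    SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
           seq.seqstart, seq.seqincrement, seq.seqmin,
           seq.seqmax, seq.seqcycle
      FROM (VALUES ('public', 's1'), ('public', 's2')) AS s (schname, seqname)
      JOIN pg_namespace n ON n.nspname = s.schname
      JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
      JOIN pg_sequence seq ON seq.seqrelid = c.oid
      JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true
     ORDER BY s.schname, s.seqname;
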
Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

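As a minimal subscriber-side sketch of the three flows above (the names and
connection string are illustrative, and the publication is assumed to already
include the sequences to be synchronized):

    -- 1) CREATE SUBSCRIPTION: published sequences are recorded in INIT state.
    CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres' PUBLICATION pub1;

    -- 2) Pick up newly published sequences (added as INIT, then synchronized).
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- 3) Re-synchronize all sequences already known to the subscription.
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

    -- Per-sequence sync state is tracked in pg_subscription_rel ('r' = READY).
    SELECT sr.srrelid::regclass AS sequence_name, sr.srsubstate
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid
     WHERE c.relkind = 'S';
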
Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |  44 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 751 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   |  88 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1411 insertions(+), 192 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 5a8275d49ba..e67444b53d7 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently supplied only by the sequencesync worker, which uses
+ * it to carry over the publisher's log_cnt while synchronizing sequence values.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN is used by logical replication of sequences: it is
+		 * recorded in the pg_subscription_rel system catalog and reflects
+		 * the LSN of the remote sequence page at the time the sequence was
+		 * synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 344dfa8e894..bbab49ae9f1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1074,7 +1074,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2074,7 +2074,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -2717,7 +2717,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
-	List	   *tablelist = NIL;
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
 	bool		check_relkind = (server_version >= 190000);
@@ -2729,25 +2729,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2764,7 +2747,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2772,6 +2755,15 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19 onwards, sequences can also be published, so fetch them too. */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2823,13 +2815,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		relinfo->relkind = relkind;
 
 		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
-			list_member_rangevar(tablelist, relinfo->rv))
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, relinfo);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2837,7 +2829,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index add2e2e066c..3add0aff35d 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last sequencesync start time (last_seqsync_start_time) that is
+ * tracked in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1590,7 +1618,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1630,6 +1658,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..0028bd52077
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,751 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can look up the sequences that need to be synchronized
+ *    in the pg_subscription_rel system catalog, whereas the launcher process
+ *    has no direct database access and so would need extra infrastructure to
+ *    connect to the databases and retrieve the sequences that require
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+#define REMOTE_SEQ_COL_COUNT 12
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	hash_seq_init(&status, sequences_to_copy);
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		if (batch_size == 0)
+		{
+			CommitTransactionCommand();
+			break;
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher and then fewer rows are returned than requested.
+		 * The hash_search() calls with HASH_REMOVE take care of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes using XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequence sync worker sequences",
+									256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key,
+								HASH_ENTER, &found);
+		Assert(seq_entry != NULL);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequence synchronization worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+	hash_seq_init(&hash_seq, sequences_to_copy);
+	while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		pfree(seq_entry->seqname);
+		pfree(seq_entry->nspname);
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * If has_pending_sequences is not NULL, *has_pending_sequences is set to true
+ * if the subscription has any sequences that are not yet in READY state.
+ *
+ * Returns true if the subscription has one or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation state info is
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index a2ba0cef007..2a820182a1a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d1493f36e04..6a946445e3b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -689,6 +689,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1229,7 +1234,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1351,7 +1359,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1407,7 +1418,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1473,7 +1487,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1608,7 +1625,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2450,7 +2470,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3271,7 +3294,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 
 		SpinLockAcquire(&leader->relmutex);
 		oldestxmin = leader->oldest_nonremovable_xid;
@@ -4113,7 +4136,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5443,7 +5469,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5563,8 +5590,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5675,6 +5702,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5694,14 +5725,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5772,6 +5805,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5784,9 +5821,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index a157cec3c4d..6b1b01d4e13 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1901,7 +1901,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index cfd0a223648..bd668d308c2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -278,11 +286,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -347,15 +356,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check the regress_s1 sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should resynchronize
+# the sequences already known to the subscription, but should not sync newly
+# published sequences that have not yet been added to the subscription.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check the regress_s4 sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should result in an error being logged
+# when the sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for the differing sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message about the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3f02884404..505b3b6723f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250905-0007-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 44bd8d0a7f9fa85f504824601b07db857383a88d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250905 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 249 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 +++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 68 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a4b3e55ba5..6054a81b923 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..06d29966e23 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add sequences to, or remove sequences from, the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables; it is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index fc314437311..51d0b389be5 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#322Chao Li
li.evan.chao@gmail.com
In reply to: vignesh C (#321)
Re: Logical Replication of sequences

On Sep 5, 2025, at 13:49, vignesh C <vignesh21@gmail.com> wrote:

On Fri, 5 Sept 2025 at 03:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Please rebase the patches as they conflict with current HEAD (due to
commit 6359989654).

Attached a rebased version of the patches.

Regards,
Vignesh
<v20250905-0001-Enhance-pg_get_sequence_data-function.patch><v20250905-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch><v20250905-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch><v20250905-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch><v20250905-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch><v20250905-0006-New-worker-for-sequence-synchronization-du.patch><v20250905-0007-Documentation-for-sequence-synchronization.patch>

A few small comments:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As it shows log_cnt now, after calling pg_get_sequence_data(), I suggest adding 8 nextval() calls, so that the sequence goes to 11, and log_cnt should become 22.

2 - 0002
```
-	if (schemaidlist && pubform->puballtables)
+	pub_type = makeStringInfo();
+
+	appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+					 pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				 errmsg("publication \"%s\" is defined as %s",
+						NameStr(pubform->pubname), pub_type->data),
+				 errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
```

Here you build a string at runtime and inject it into the log message, which seems to break some PG rules. In one of my previous reviews, I raised a comment about removing some duplicate code this way, and got this info: /messages/by-id/397c16a7-f57b-4f81-8497-6d692a9bf596@eisentraut.org
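
For illustration, something along these lines would keep each format string constant; this is just an untested sketch built from the fields in the quoted hunk, not the committed wording:

```
/* sketch: one constant message per case, so the strings stay translatable */
if (schemaidlist && pubform->puballtables)
	ereport(ERROR,
			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
			errmsg("publication \"%s\" is defined as FOR ALL TABLES",
				   NameStr(pubform->pubname)),
			errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications."));
else if (schemaidlist && pubform->puballsequences)
	ereport(ERROR,
			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
			errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
				   NameStr(pubform->pubname)),
			errdetail("Schemas cannot be added to or dropped from FOR ALL SEQUENCES publications."));
/* the combined FOR ALL TABLES, ALL SEQUENCES case would need its own branch, checked first */
```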

3 - 0005
```
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+		{
+			has_subrels = true;
+			break;
+		}
```
For publications, the only valid relkinds are RELKIND_RELATION, RELKIND_PARTITIONED_TABLE and the newly added RELKIND_SEQUENCE. Here you want to check for a table, and using “!= RELKIND_SEQUENCE” works. But I think “kind == RELKIND_RELATION || kind == RELKIND_PARTITIONED_TABLE” is clearer and more reliable. Consider if some other kind is added later: then “kind != RELKIND_SEQUENCE” might break, and it would be hard to find the root cause.
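
A sketch of that shape, using the same variables as the quoted hunk (untested, for illustration only):

```
char		relkind = get_rel_relkind(subrel->srrelid);

/*
 * Count only table relkinds as "the subscription has tables"; any relkind
 * added to publications later then has to be handled explicitly instead of
 * being silently treated as a table.
 */
if (relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
{
	has_subrels = true;
	break;
}
```
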
4 -0006
```
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs,  c.relkind\n"
```
There is an extra whitespace before “c.relkind”.

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#323vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#320)
Re: Logical Replication of sequences

On Fri, 5 Sept 2025 at 04:01, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Aug 20, 2025 at 4:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Aug 18, 2025 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the updated version has the changes for the same.

I wanted to first discuss a few design points. The patch implements
"ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES" such that it
copies the existing sequences values and also adds/removes any missing
sequences. For the second part (add/remove sequences), we already have
a separate command "ALTER SUBSCRIPTION ... REFRESH PUBLICATION". So, I
feel the new command should only copy the sequence values, as that
will keep the interface easy to define and understand. Additionally,
it will help to simplify the code in the patch, especially in the
function AlterSubscription_refresh.

While I agree that the new command just copies the sequence values,
I'm not sure the command should be implemented as an extension of
ALTER SUBSCRIPTION ... REFRESH PUBLICATION command. Probably what the
new command does is quite different from what REFRESH PUBLICATION
command does?

Alternatively, the syntax options could be:
ALTER SUBSCRIPTION subname RESYNC PUBLICATION SEQUENCES;
or
ALTER SUBSCRIPTION subname RESYNC SEQUENCES;

Do you have a preference between the two?

We previously discussed *not* to launch an apply worker if the
corresponding publication(s) only publish sequences. See [1]. We
should consider it again to see if that is a good idea. It will have
some drawbacks as compared to the current approach of doing sync via
sync worker. The command could take time for a large number of
sequences, and on failure, retry won't happen which can happen with
background workers. Additionally, when the connect option is false for
a subscription during creation, the user needs to later call REFRESH
to sync the sequences after enabling the subscription. OTOH, doing the
sync during the command will bring more predictability and simplify
the patch. What do others think?

It seems okay to me that we launch an apply worker for a subscription
corresponding to sequence-only publications. I think the situation
seems somewhat similar to the case where we launch an apply worker
even for a subscription corresponding to empty publications. It would
be quite a rare case in practice where publications have only
sequences.

I agree with this.

I guess that it would rather simplify the patch if we can
cut the part of doing the sync during the command (i.e., not
distinguish between table-and-sequence publications and sequence-only
publications), no?

Currently, we don’t make a distinction between the two. Just to
clarify, could you point me to the specific part of the patch you’re
referring to?

Regards,
Vignesh

#324Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#323)
Re: Logical Replication of sequences

On Fri, Sep 5, 2025 at 2:46 PM vignesh C <vignesh21@gmail.com> wrote:

On Fri, 5 Sept 2025 at 04:01, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Aug 20, 2025 at 4:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Aug 18, 2025 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the updated version has the changes for the same.

I wanted to first discuss a few design points. The patch implements
"ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES" such that it
copies the existing sequences values and also adds/removes any missing
sequences. For the second part (add/remove sequences), we already have
a separate command "ALTER SUBSCRIPTION ... REFRESH PUBLICATION". So, I
feel the new command should only copy the sequence values, as that
will keep the interface easy to define and understand. Additionally,
it will help to simplify the code in the patch, especially in the
function AlterSubscription_refresh.

While I agree that the new command just copies the sequence values,
I'm not sure the command should be implemented as an extension of
ALTER SUBSCRIPTION ... REFRESH PUBLICATION command. Probably what the
new command does is quite different from what REFRESH PUBLICATION
command does?

Alternatively, the syntax options could be:
ALTER SUBSCRIPTION subname RESYNC PUBLICATION SEQUENCES;
or
ALTER SUBSCRIPTION subname RESYNC SEQUENCES;

The other option along these lines is to use SYNC instead of RESYNC,
as to users RESYNC sounds more like a redo, where something has failed
and we are trying to do it again. Also, this command will be used even
for the first-time sync of sequences. I prefer the first alternative
among these, as having the PUBLICATION keyword suggests that we are
syncing the sequences corresponding to all the publications that are
part of the subscription, but I am okay with the second too.

--
With Regards,
Amit Kapila.

#325vignesh C
vignesh21@gmail.com
In reply to: Chao Li (#322)
7 attachment(s)
Re: Logical Replication of sequences

On Fri, 5 Sept 2025 at 13:54, Chao Li <li.evan.chao@gmail.com> wrote:

On Sep 5, 2025, at 13:49, vignesh C <vignesh21@gmail.com> wrote:

On Fri, 5 Sept 2025 at 03:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Please rebase the patches as they conflict with current HEAD (due to
commit 6359989654).

Attached a rebased version of the patches.

Regards,
Vignesh
<v20250905-0001-Enhance-pg_get_sequence_data-function.patch><v20250905-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch><v20250905-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch><v20250905-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch><v20250905-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch><v20250905-0006-New-worker-for-sequence-synchronization-du.patch><v20250905-0007-Documentation-for-sequence-synchronization.patch>

A few small comments:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
SELECT nextval('test_seq1');
-- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As it shows log_cnt now, after calling pg_get_sequence_data(), I suggest adding 8 nextval() calls, so that the sequence goes to 11, and log_cnt should become 22.

Could you please explain the reason you’d like this to be done?

2 - 0002
```
- if (schemaidlist && pubform->puballtables)
+ pub_type = makeStringInfo();
+
+ appendStringInfo(pub_type, "%s", pubform->puballtables && pubform->puballsequences ? "FOR ALL TABLES, ALL SEQUENCES" :
+ pubform->puballtables ? "FOR ALL TABLES" : "FOR ALL SEQUENCES");
+
+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg("publication \"%s\" is defined as FOR ALL TABLES",
- NameStr(pubform->pubname)),
- errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+ errmsg("publication \"%s\" is defined as %s",
+ NameStr(pubform->pubname), pub_type->data),
+ errdetail("Schemas cannot be added to or dropped from %s publications.", pub_type->data)));
```

Here you build a string at runtime and inject it into the log message, which seems to break some PG rules. In one of my previous reviews, I raised a comment about removing some duplicate code this way, and got this info: /messages/by-id/397c16a7-f57b-4f81-8497-6d692a9bf596@eisentraut.org

Modified

3 - 0005
```
+ /*
+ * Skip sequence tuples. If even a single table tuple exists then the
+ * subscription has tables.
+ */
+ if (get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
+ {
+ has_subrels = true;
+ break;
+ }
```
For publications, the only valid relkinds are RELKIND_RELATION, RELKIND_PARTITIONED_TABLE and the newly added RELKIND_SEQUENCE. Here you want to check for a table, and using “!= RELKIND_SEQUENCE” works. But I think “kind == RELKIND_RELATION || kind == RELKIND_PARTITIONED_TABLE” is clearer and more reliable. Consider if some other kind is added later: then “kind != RELKIND_SEQUENCE” might break, and it would be hard to find the root cause.

Modified

4 -0006
```
- appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+ appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs,  c.relkind\n"
```
There is an extra whitespace before “c.relkind”.

Modified

Thanks for the comments. The attached v20250908 version of the patches has the
changes for the same.

Regards,
Vignesh

Attachments:

v20250908-0001-Enhance-pg_get_sequence_data-function.patch (application/octet-stream)
From fb0d47c3d6568b545467172d6b335c2137b593a8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250908 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 118d6da1ace..62bcd9d921c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..87433e508ca 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         10 | t         |      32 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0
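
As a quick reference, here is a minimal way to exercise the extended
function once the patch above is applied. It mirrors the updated
regression test; "demo_seq" is just a scratch sequence used for
illustration:

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');

-- last_value and is_called as before, plus the new log_cnt and page_lsn
-- columns; the page LSN should not be ahead of the current WAL position.
SELECT last_value, is_called, log_cnt,
       page_lsn <= pg_current_wal_lsn() AS lsn_ok
  FROM pg_get_sequence_data('demo_seq');

DROP SEQUENCE demo_seq;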

Attachment: v20250908-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (application/octet-stream)
From d6f8a5124e85948fd728186db376845748664169 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250908 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 82cf65fae73..5c757776afc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1602,8 +1602,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1657,8 +1657,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1682,12 +1682,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1699,8 +1699,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1712,10 +1712,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2312,17 +2312,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2380,13 +2380,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 03c0913bf72..6a5b226c906 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10984,7 +10984,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
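
For context, the user-facing command itself is unchanged; the patch above
only spells out the command name in messages and comments. For example,
with a hypothetical subscription "mysub":

ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

-- Per the updated regression tests, a disabled subscription now reports:
--   ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
-- and running it inside a transaction block reports:
--   ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block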

Attachment: v20250908-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 90c9e06283fd9e677631690ce3a884f0165aaaad Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:59:39 +0530
Subject: [PATCH v20250908 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d3356bc84ee..3c777363243 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f1ebd63e792..d1493f36e04 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1230,7 +1230,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1352,7 +1352,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1408,7 +1408,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1474,7 +1474,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1609,7 +1609,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2451,7 +2451,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4114,7 +4114,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5744,7 +5744,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 7522efe02e4..e815e1c73be 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5258,7 +5258,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5316,7 +5316,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 62ea1a00580..cfd0a223648 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -274,9 +276,13 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 extern bool AllTablesyncsReady(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 49af245ed8f..a7ff6601054 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250908-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 2b00e3b6087ccf0c51b531929084f093099151e9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250908 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now displays which publications contain
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
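
To make the syntax described above concrete, a minimal sketch (the
publication names are hypothetical; the statements follow the grammar and
tests added by this patch, and both forms require superuser):

-- Sequence-only publication; WITH clause parameters are rejected here.
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

-- Combined with ALL TABLES; publish options apply to the table side only
-- and are ignored for sequence synchronization (a NOTICE is raised).
CREATE PUBLICATION all_pub FOR ALL TABLES, ALL SEQUENCES
  WITH (publish = 'insert, update');

-- \dRp shows whether a publication includes all sequences, and \d on a
-- sequence lists the publications that contain it.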
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 152 ++++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 818 insertions(+), 386 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..a0d3ff0d9ef 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,50 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+							NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (tables && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+							NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 }
 
 /*
@@ -2018,19 +2097,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9fd48acb1f8..03c0913bf72 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10703,7 +10709,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10723,13 +10734,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10841,6 +10853,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19616,6 +19650,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bea793456f9..7522efe02e4 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4477,6 +4477,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4507,9 +4508,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4525,6 +4531,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4545,6 +4552,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4596,52 +4605,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to publications defined only as FOR ALL SEQUENCES */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
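
With the dumpPublication() changes above, the reconstructed statements come
out roughly as follows (these match the 002_pg_dump.pl expectations added
below); the WITH clause is dropped only for a publication defined purely as
FOR ALL SEQUENCES:

CREATE PUBLICATION pub5 FOR ALL SEQUENCES;
CREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');
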
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		/* set the title before initializing the table so it is not lost */
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
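
For reference, the \d+ output produced by the code above for a sequence that
is covered by a FOR ALL SEQUENCES publication looks like this (copied from the
expected regression output further down):

\d+ regress_pub_seq0
                      Sequence "public.regress_pub_seq0"
  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
 bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
Publications:
    "regress_pub_forallsequences1"
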
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication that should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
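
The new flag is also visible directly in the catalog, e.g. (matching the
regression test added below):

SELECT pubname, puballtables, puballsequences
  FROM pg_publication
 WHERE pubname = 'regress_pub_forallsequences1';

           pubname            | puballtables | puballsequences
------------------------------+--------------+-----------------
 regress_pub_forallsequences1 | f            | t
(1 row)
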
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication object types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..44c582bdc35 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a13e8162890..49af245ed8f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0
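
As a quick orientation for the regression tests added above, here is a
minimal publisher-side sketch (illustrative only; the sequence and
publication names below are made up and not taken from the patch):

    CREATE SEQUENCE seq_demo;
    CREATE PUBLICATION pub_seq_demo FOR ALL SEQUENCES;

    -- puballsequences is the new pg_publication flag set by FOR ALL SEQUENCES
    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname = 'pub_seq_demo';

    -- \dRp+ output now includes the "All sequences" column
    \dRp+ pub_seq_demo

    DROP PUBLICATION pub_seq_demo;
    DROP SEQUENCE seq_demo;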

Attachment: v20250908-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From 5da9b1b8cdba2f1577e98b8b9f5d40e84e20621e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:45:14 +0530
Subject: [PATCH v20250908 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
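
For illustration only (the subscription name sub_demo below is made up and
not part of this patch), the new command and the related catalogs could be
exercised roughly like this:

    -- on the subscriber: mark the subscription's sequences for
    -- resynchronization by resetting their state to INIT
    ALTER SUBSCRIPTION sub_demo REFRESH PUBLICATION SEQUENCES;

    -- sequences tracked in pg_subscription_rel; srsubstate 'i' (INIT)
    -- means the sequence is queued for (re)synchronization
    SELECT sr.srrelid::regclass AS seqname, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';

    -- on the publisher: sequences exposed per publication, via the
    -- pg_publication_sequences view added by this patch
    SELECT * FROM pg_publication_sequences;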
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 329 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 403 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..c77e4eae718 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is true, only get tables that have not
+ * reached READY state; otherwise get all tables.
+ * If getting sequences and not_ready is true, only get sequences that have
+ * not reached READY state (i.e. are still in INIT state); otherwise get all
+ * sequences.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +584,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +609,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5c757776afc..344dfa8e894 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -869,13 +903,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -883,7 +916,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -905,17 +939,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -933,34 +967,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a
+		 * new state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -968,28 +1015,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1011,41 +1059,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation kind is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1054,10 +1116,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1071,11 +1133,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1087,6 +1151,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1722,6 +1810,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1997,7 +2097,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2393,11 +2493,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2583,8 +2687,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2592,15 +2711,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2608,8 +2729,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2638,7 +2776,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2651,7 +2789,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2667,22 +2805,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6a5b226c906..6a06044d5fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10989,6 +10989,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 3c777363243..a2ba0cef007 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62bcd9d921c..4660e42d775 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12282,6 +12282,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a7ff6601054..a3f02884404 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0

Attachment: v20250908-0006-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 33129db7c18a96d0f611b1fb79c68b2518ddd558 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 5 Sep 2025 08:34:58 +0530
Subject: [PATCH v20250908 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state via pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100 (see the state-check sketch below).
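
For example, the per-sequence state can be checked on the subscriber from
pg_subscription_rel (a sketch only, assuming the existing srsubstate codes
'i' for INIT and 'r' for READY are reused for sequences):

    SELECT srrelid::regclass AS sequence, srsubstate
      FROM pg_subscription_rel
     WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');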

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case
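
A minimal usage sketch (the subscription name "seq_sub" is hypothetical; the
subscription must be enabled for this command):

    -- Re-synchronize all sequences known to the subscription.
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    -- On the publisher, list the sequences each publication publishes.
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences;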

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |  44 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 752 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   |  88 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1412 insertions(+), 192 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index c77e4eae718..961d255be54 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 344dfa8e894..b67e7fd8e47 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1074,7 +1074,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2074,7 +2074,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -2717,7 +2717,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
-	List	   *tablelist = NIL;
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
 	bool		check_relkind = (server_version >= 190000);
@@ -2729,25 +2729,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2764,7 +2747,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2772,6 +2755,15 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, inclusion of sequences in the target is supported */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2823,13 +2815,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		relinfo->relkind = relkind;
 
 		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
-			list_member_rangevar(tablelist, relinfo->rv))
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, relinfo);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2837,7 +2829,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index add2e2e066c..3add0aff35d 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1590,7 +1618,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1630,6 +1658,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..5ecce367e1f
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,752 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences in
+ * a single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 12
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skipping synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	hash_seq_init(&status, sequences_to_copy);
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		if (batch_size == 0)
+		{
+			CommitTransactionCommand();
+			break;
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher and fewer rows than the batch size may be returned.
+		 * hash_search() with HASH_REMOVE takes care of the actual count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
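+/*
+ * Hash function for the sequences_to_copy hash table: combine the hashes of
+ * the schema and sequence names.
+ */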
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* XOR-combine the two hashes */
+	return h1 ^ h2;
+}
+
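+/*
+ * Match function for the sequences_to_copy hash table: two keys are equal
+ * only when both the schema and sequence names match.
+ */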
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequence sync worker sequences",
+									256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
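+	/*
+	 * Collect all non-READY sequences belonging to this subscription from
+	 * pg_subscription_rel into the sequences_to_copy hash table.
+	 */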
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key,
+								HASH_ENTER, &found);
+		Assert(seq_entry != NULL);
+
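+		/*
+		 * Zero the whole entry before filling it in; the key strings are
+		 * replaced below with copies allocated in CacheMemoryContext.
+		 */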
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+	hash_seq_init(&hash_seq, sequences_to_copy);
+	while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		pfree(seq_entry->seqname);
+		pfree(seq_entry->nspname);
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index a2ba0cef007..2a820182a1a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d1493f36e04..6a946445e3b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -689,6 +689,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1229,7 +1234,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1351,7 +1359,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1407,7 +1418,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1473,7 +1487,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1608,7 +1625,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2450,7 +2470,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3271,7 +3294,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 
 		SpinLockAcquire(&leader->relmutex);
 		oldestxmin = leader->oldest_nonremovable_xid;
@@ -4113,7 +4136,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5443,7 +5469,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5563,8 +5590,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5675,6 +5702,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5694,14 +5725,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5772,6 +5805,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5784,9 +5821,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 0da01627cfe..f96cc78c3ac 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4660e42d775..dce0c2ce108 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5688,9 +5688,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;		/* sequence name */
+	char	   *nspname;		/* schema name */
+	Oid			localrelid;		/* OID of the sequence on the subscriber */
+	bool		remote_seq_queried; /* already requested from the publisher? */
+	Oid			seqowner;		/* owner of the local sequence */
+	bool		entry_valid;	/* cleared by relcache invalidation */
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index cfd0a223648..bd668d308c2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -278,11 +286,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -347,15 +356,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 586ffba434e..a6c267a8a2c 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -42,6 +42,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber to match the publisher so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should resynchronize
+# the sequences already known to the subscription, but should not pick up
+# sequences newly published on the publisher.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the warning for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the warning for missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3f02884404..505b3b6723f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250908-0007-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From d74369c244019c25e8342684f189763aadd684aa Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250908 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 249 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 +++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 68 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2a3685f474a..98fe71aef10 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..06d29966e23 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index fc314437311..51d0b389be5 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#326Chao Li
li.evan.chao@gmail.com
In reply to: vignesh C (#325)
Re: Logical Replication of sequences

On Sep 8, 2025, at 14:00, vignesh C <vignesh21@gmail.com> wrote:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
SELECT nextval('test_seq1');
-- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As it shows log_cnt now, after calling pg_get_sequence_data(), I suggest adding 8 nextval() calls, so that the sequence goes to 11 and log_cnt becomes 22.

Could you please explain the reason you’d like this to be done?

Because log_cnt is newly exposed, we want to verify its value in the test. When I first ran the test code, I saw that the initial value of log_cnt was 32; I thought log_cnt might decrease if I ran nextval() once more, but it didn't. Only after I ran 10 (the cache size) more nextval() calls did log_cnt decrease by 10, to 22. The test code is where people look for the expected behavior, so I think adding more nextval() calls to verify and show how log_cnt changes would be helpful.
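
To make that countdown concrete, here is a minimal sketch against a sequence with the default CACHE 1, assuming the 0001 patch's pg_get_sequence_data() is applied (demo_seq is just an illustrative name). The log_cnt values in the comments are what I would expect from the usual "WAL-log 32 values ahead, then count down" behavior; a sequence with a larger CACHE, like the cache of 10 mentioned above, only advances its on-disk state when a fresh batch of values is fetched, so its numbers will differ.

```
CREATE SEQUENCE demo_seq;

-- The first call WAL-logs 32 values ahead of the one it returns.
SELECT nextval('demo_seq');                             -- returns 1
SELECT last_value, log_cnt, is_called
FROM pg_get_sequence_data('demo_seq');                  -- expect 1 | 32 | t

-- Each further call counts log_cnt down by one ...
SELECT nextval('demo_seq') FROM generate_series(1, 8);  -- returns 2..9
SELECT last_value, log_cnt, is_called
FROM pg_get_sequence_data('demo_seq');                  -- expect 9 | 24 | t

-- ... until it reaches 0, after which the next call re-logs
-- and resets it to 32.

DROP SEQUENCE demo_seq;
```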

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#327vignesh C
vignesh21@gmail.com
In reply to: Chao Li (#326)
Re: Logical Replication of sequences

On Mon, 8 Sept 2025 at 12:05, Chao Li <li.evan.chao@gmail.com> wrote:

On Sep 8, 2025, at 14:00, vignesh C <vignesh21@gmail.com> wrote:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
SELECT nextval('test_seq1');
-- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As it shows log_cnt now, after calling pg_get_sequence_data(), I suggest adding 8 nextval() calls, so that the sequence goes to 11 and log_cnt becomes 22.

Could you please explain the reason you’d like this to be done?

Because log_cnt is newly exposed, we want to verify its value in the test. When I first ran the test code, I saw that the initial value of log_cnt was 32; I thought log_cnt might decrease if I ran nextval() once more, but it didn't. Only after I ran 10 (the cache size) more nextval() calls did log_cnt decrease by 10, to 22. The test code is where people look for the expected behavior, so I think adding more nextval() calls to verify and show how log_cnt changes would be helpful.

Thanks, I will include this in the next version.

Regards,
Vignesh

#328vignesh C
vignesh21@gmail.com
In reply to: Chao Li (#326)
7 attachment(s)
Re: Logical Replication of sequences

On Mon, 8 Sept 2025 at 12:05, Chao Li <li.evan.chao@gmail.com> wrote:

On Sep 8, 2025, at 14:00, vignesh C <vignesh21@gmail.com> wrote:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
SELECT nextval('test_seq1');
-- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As it shows log_cnt now, after calling pg_get_sequence_data(), I suggest adding 8 nextval() calls, so that the sequence goes to 11 and log_cnt becomes 22.

Could you please explain the reason you’d like this to be done?

Because log_cnt is newly exposed, we want to verify its value in the test. When I first ran the test code, I saw that the initial value of log_cnt was 32; I thought log_cnt might decrease if I ran nextval() once more, but it didn't. Only after I ran 10 (the cache size) more nextval() calls did log_cnt decrease by 10, to 22. The test code is where people look for the expected behavior, so I think adding more nextval() calls to verify and show how log_cnt changes would be helpful.

This is addressed in the attached patch. I have also rebased the patch
because of recent commits.

Regards,
Vignesh

Attachments:

v20250915-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From 9770057fb6c88a271d840f1463d0635bffef1838 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250915 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, enhancements to psql commands now display which
publications contain the specified sequence (\d command), and if a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 152 ++++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 818 insertions(+), 386 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..a0d3ff0d9ef 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("WITH clause parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,50 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+							NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (tables && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+							NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+							NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 }
 
 /*
@@ -2018,19 +2097,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9fd48acb1f8..03c0913bf72 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10703,7 +10709,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10723,13 +10734,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10841,6 +10853,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19616,6 +19650,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also check that no pub_object_type is specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
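
For quick reference, the new pub_obj_type_list grammar accepts statements like the following (publication names are illustrative), while preprocess_pub_all_objtype_list() rejects duplicated object types:

CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
CREATE PUBLICATION pub_both FOR ALL TABLES, ALL SEQUENCES;

CREATE PUBLICATION pub_dup FOR ALL TABLES, ALL TABLES;
ERROR:  invalid publication object list
DETAIL:  ALL TABLES can be specified only once.
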
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index b4c45ad803e..2dad2dd19f5 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,52 +4659,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause does not apply to publications defined only as FOR ALL SEQUENCES */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
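
With the dumpPublication() changes above, a sequence-only publication should be dumped without a WITH clause, while a publication that also covers tables keeps it; this matches the pub5/pub6 test cases added below, e.g.:

CREATE PUBLICATION pub5 FOR ALL SEQUENCES;
CREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');
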
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index e7a2d64f741..75f1e64eb02 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3354,6 +3354,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..44c582bdc35 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should emit a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  WITH clause parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  WITH clause parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a13e8162890..49af245ed8f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250915-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 12357fb6e11af51198aefcc9c8e9bc8cc12b61bd Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:08:03 +0530
Subject: [PATCH v20250915 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 ++++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 186 ++---------------
 src/backend/replication/logical/worker.c      |  18 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  12 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 234 insertions(+), 190 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..1e01053b955 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ee6ac22329f..6b885bca5e0 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1229,7 +1229,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1351,7 +1351,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1407,7 +1407,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1473,7 +1473,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1608,7 +1608,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2450,7 +2450,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4119,7 +4119,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -5766,7 +5766,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2dad2dd19f5..61279b6860f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5312,7 +5312,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5370,7 +5370,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..3018761d446 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -275,9 +277,13 @@ extern bool AllTablesyncsReady(void);
 extern bool HasSubscriptionRelationsCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 49af245ed8f..a7ff6601054 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250915-0006-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 7f082d17aa468f95efa92c7aa7ef27005e8386eb Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:11:20 +0530
Subject: [PATCH v20250915 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; synchronized sequences are committed in
       batches of 100. (A sample state query is sketched below.)
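
As a rough illustration (not part of the patch), the resulting per-sequence
state can be inspected on the subscriber through pg_subscription_rel, assuming
the usual single-letter state codes ('i' for INIT, 'r' for READY):

    SELECT srsubid, srrelid::regclass AS seqname, srsubstate
    FROM pg_subscription_rel
    WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');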

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case
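
For reference, a minimal sketch of the command forms described above; the
subscription, connection, and publication names are hypothetical:

    -- initial synchronization of all published sequences
    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub1;

    -- adds newly published sequences and synchronizes them
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- re-synchronizes all sequences already known to the subscription
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;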

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |  44 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  59 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 752 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 101 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1413 insertions(+), 204 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index c77e4eae718..961d255be54 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set a
+ * sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 8c60f7a5011..5c860139933 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -2727,7 +2727,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
-	List	   *tablelist = NIL;
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
 	bool		check_relkind = (server_version >= 190000);
@@ -2739,25 +2739,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2774,7 +2757,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2782,6 +2765,15 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* Sequences can also be included in publications from version 19 onwards */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2833,13 +2825,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		relinfo->relkind = relkind;
 
 		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
-			list_member_rangevar(tablelist, relinfo->rv))
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, relinfo);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2847,7 +2839,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index add2e2e066c..3add0aff35d 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last start time of the sequencesync worker
+ * (last_seqsync_start_time), which is tracked in the subscription's apply
+ * worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,7 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY, false);
 			LWLockRelease(LogicalRepWorkerLock);
 
 			if (w != NULL)
@@ -1590,7 +1618,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1630,6 +1658,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..5ecce367e1f
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,752 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. The launcher, by contrast,
+ *    operates without direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 12
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences for which the user lacks sufficient privileges, as
+ * well as sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences to have parameters matching those of the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences to have parameters matching those of the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy the existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber's
+ * sequence to the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy the existing data of sequences from the publisher.  Each local
+ * sequence is opened and locked individually while it is being updated.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	hash_seq_init(&status, sequences_to_copy);
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		if (batch_size == 0)
+		{
+			CommitTransactionCommand();
+			break;
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher and fewer rows than the batch size may be returned.
+		 * The hash_search() with HASH_REMOVE takes care of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if the hash table is gone or no sequence is listed yet */
+	if (sequences_to_copy == NULL ||
+		hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* combine the two hashes with XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequence sync worker sequences",
+									256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key,
+								HASH_ENTER, &found);
+		Assert(seq_entry != NULL);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+	hash_seq_init(&hash_seq, sequences_to_copy);
+	while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		pfree(seq_entry->seqname);
+		pfree(seq_entry->nspname);
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably caused by a
+ * system resource error and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, so the
+	 * same values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8f7270ed0ff..0a683bd37a3 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,8 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionRelationsCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6b885bca5e0..b1616a31b6b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -688,6 +688,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1228,7 +1233,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1350,7 +1358,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1406,7 +1417,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1472,7 +1486,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1607,7 +1624,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2449,7 +2469,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3270,7 +3293,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4118,7 +4141,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5465,7 +5491,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 am_tablesync_worker() ? WORKERTYPE_TABLESYNC : WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5585,8 +5612,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5697,6 +5724,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5716,14 +5747,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5794,6 +5827,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5806,9 +5843,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 6bc6be13d2a..f5a09c0f536 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34409796b85..45dd10cd9ee 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5700,9 +5700,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 3018761d446..dd0ed7c73b1 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should resynchronize
+# the sequences already known to the subscription, but should not pick up
+# newly published sequences that have not yet been added via REFRESH PUBLICATION.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for the mismatched sequence definition is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message about the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3f02884404..505b3b6723f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

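The patch above also surfaces sequence synchronization failures in the cumulative statistics: pg_stat_subscription_stats gains a sequence_sync_error_count column alongside apply_error_count and sync_error_count (see the pgstatfuncs.c and rules.out hunks). As a rough illustration (an example query against the updated view, not something taken from the patch):

    SELECT subname, apply_error_count, sequence_sync_error_count, sync_error_count
      FROM pg_stat_subscription_stats;

A non-zero sequence_sync_error_count indicates the sequencesync worker failed, in the same way sync_error_count tracks tablesync failures.
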
Attachment: v20250915-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (application/octet-stream)
From cce7b7d377de557163c43222e214fb27323445cb Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250915 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch renames ALTER SUBSCRIPTION ... REFRESH to
ALTER SUBSCRIPTION ... REFRESH PUBLICATION in internal identifiers and
user-facing error messages, for improved clarity and extensibility,
especially as the REFRESH operation is being extended to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 03c0913bf72..6a5b226c906 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10984,7 +10984,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
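
For quick reference, the renamed messages above surface to users as follows. This is only an illustrative psql sketch (the subscription names are the ones from the regression test), matching the updated expected output:

    BEGIN;
    ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
    -- ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
    END;
    ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
    -- ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions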

Attachment: v20250915-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (application/octet-stream)
From a679839c1e82161fcc74a5df7ddfb95e7c82a1d3 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:45:14 +0530
Subject: [PATCH v20250915 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the DATASYNC state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 329 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 403 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..dc46d24c05d 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..c77e4eae718 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, get all tables; otherwise,
+ * only get tables that have not reached READY state.
+ * If getting sequences and not_ready is false, get all sequences;
+ * otherwise, only get sequences that have not reached READY state (i.e. are
+ * still in INIT state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +584,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +609,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..8c60f7a5011 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,8 +2739,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2648,7 +2786,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2799,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2815,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6a5b226c906..6a06044d5fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10989,6 +10989,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1e01053b955..8f7270ed0ff 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9fa55923bb7..34409796b85 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12294,6 +12294,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a7ff6601054..a3f02884404 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0
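
As a usage sketch of what the 0005 patch adds (object names and the connection string are illustrative, and this assumes the whole patch series, including the FOR ALL SEQUENCES publication support, is applied):

    -- publisher: publish the current state of every sequence in the database
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
    -- publisher: inspect which sequences the publication carries via the new view
    SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

    -- subscriber: sequences are picked up when the subscription is created
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;
    -- subscriber: later, move all sequence entries back to the DATASYNC ('d')
    -- state so the sequencesync worker copies their current values again
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

Since sequences are not replicated incrementally, the REFRESH PUBLICATION SEQUENCES step is what brings subscriber-side sequence values up to date, e.g. before a switchover.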

Attachment: v20250915-0007-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From 6d2a99953e1415c4028758c44be0965ae2d6a357 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250915 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 249 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 +++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 ++++++
 8 files changed, 462 insertions(+), 68 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e9b420f3ddb..7138de1acb8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..06d29966e23 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add sequences to, or remove sequences from, the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. It is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index fc314437311..51d0b389be5 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

v20250915-0001-Enhance-pg_get_sequence_data-function.patch
From fa7e0507ac0400e43a5eea9d39d639ffce3064ac Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250915 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 +++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out | 28 ++++++++------------------
 src/test/regress/sql/sequence.sql      |  6 ++----
 5 files changed, 53 insertions(+), 30 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 03e82d28c87..9fa55923bb7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..aa53442c561 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -821,29 +821,17 @@ DROP USER regress_seq_user;
 DROP SEQUENCE seq;
 -- cache tests
 CREATE SEQUENCE test_seq1 CACHE 10;
-SELECT nextval('test_seq1');
- nextval 
----------
-       1
-(1 row)
-
-SELECT nextval('test_seq1');
- nextval 
----------
-       2
-(1 row)
-
-SELECT nextval('test_seq1');
- nextval 
----------
-       3
+SELECT max(nextval('test_seq1')) from generate_series(1,11);
+ max 
+-----
+  11
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn 
+------------+-----------+---------+-----
+         20 | t         |      22 | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..eb2eefec3ba 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -409,11 +409,9 @@ DROP SEQUENCE seq;
 
 -- cache tests
 CREATE SEQUENCE test_seq1 CACHE 10;
-SELECT nextval('test_seq1');
-SELECT nextval('test_seq1');
-SELECT nextval('test_seq1');
+SELECT max(nextval('test_seq1')) from generate_series(1,11);
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

#329shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#328)
Re: Logical Replication of sequences

On Mon, Sep 15, 2025 at 2:36 PM vignesh C <vignesh21@gmail.com> wrote:

This is addressed in the attached patch, also rebased the patch
because of recent commits.

One of the patches conflict with recent commit 0d48d393 (launcher.c
changes) and thus needs a rebase.

thanks
Shveta

#330Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#328)
Re: Logical Replication of sequences

On Mon, 15 Sept 2025 at 14:36, vignesh C <vignesh21@gmail.com> wrote:

On Mon, 8 Sept 2025 at 12:05, Chao Li <li.evan.chao@gmail.com> wrote:

On Sep 8, 2025, at 14:00, vignesh C <vignesh21@gmail.com> wrote:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
SELECT nextval('test_seq1');
-- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As it shows log_cnt now, after calling pg_get_sequence_data(), I suggest add 8 nextval(), so that sequence goes to 11, and log_cnt should become to 22.

Could you please explain the reason you’d like this to be done?

Because log_cnt is newly exposed, we want to verify its value in the test. When I first time ran the test code, I saw initial value of log_cnt was 32, then I thought log_cnt might get decreased if I ran nextval() again, but it didn’t. Only after I ran 10 (cache size) more nextval(), log_cnt got decreased by 10 to 22. The test code is a place for people to look for expected behavior. So I think adding more nextval() to verify and show the change of log_cnt is helpful.

This is addressed in the attached patch, also rebased the patch
because of recent commits.

Hi Vignesh,

FYI: the patches are not applying on current HEAD.

I have reviewed the patches and here are my comments:

1. For patch 0001:
The 'log_cnt' column of 'pg_get_sequence_data' gets reset after a checkpoint.
For example:
postgres=# select * from pg_get_sequence_data('seq1');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
3 | t | 31 | 0/0177C800
(1 row)
postgres=# checkpoint;
CHECKPOINT
postgres=# select nextval('seq1');
nextval
---------
4
(1 row)
postgres=# select * from pg_get_sequence_data('seq1');
last_value | is_called | log_cnt | page_lsn
------------+-----------+---------+------------
4 | t | 32 | 0/0177C998

So, for tests:
+SELECT last_value, is_called, log_cnt, page_lsn <=
pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt | lsn
+------------+-----------+---------+-----
+         20 | t         |      22 | t

Is there a possibility that it can show a different value of "log_cnt"
due to a checkpoint running in the background or in a parallel test?

I see the following comment in a similar test:
-- log_cnt can be higher if there is a checkpoint just at the right
-- time, so just test for the expected range
SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM
foo_seq_new;

Thoughts?
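
Perhaps the new test could use the same range-check trick. A rough,
untested sketch (assuming 22 is the normal value and 32 is what an
ill-timed checkpoint would produce; the accepted set would need to be
verified):

-- pg_get_sequence_data, tolerant of a concurrent checkpoint
-- (the accepted log_cnt values are an assumption to be confirmed)
SELECT last_value, is_called, log_cnt IN (22, 32) AS log_cnt_ok,
       page_lsn <= pg_current_wal_lsn() AS lsn
FROM pg_get_sequence_data('test_seq1');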

2. For patch 0002:
I created a publication pub1 for all sequences, and altering it with
SET now gives the following error:
postgres=# alter publication pub1 SET (publish_via_partition_root);
ERROR: WITH clause parameters are not supported for publications
defined as FOR ALL SEQUENCES

I think the error message should not refer to "WITH clause parameters"
for the "ALTER PUBLICATION ... SET ..." command.

3. For patch 0003:
We need to update the function 'HasSubscriptionRelationsCached'.
It has the function call "has_subrels = FetchTableStates(&started_tx)".

I think FetchTableStates should be updated to FetchRelationStates.
This function call was added in the recent commit [1]https://github.com/postgres/postgres/commit/1f7e9ba3ac4eff13041abcc4c9c517ad835fa449.

4. For patch 0007:
We should update logical-replication.sgml where the \dRp+ output appears:

<programlisting><![CDATA[
/* pub # */ \dRp+
                                   Publication p1
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
----------+------------+---------+---------+---------+-----------+-------------------+----------
 postgres | f          | t       | t       | t       | t         | none              | f
Tables:
    "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))

                                   Publication p2
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
----------+------------+---------+---------+---------+-----------+-------------------+----------
 postgres | f          | t       | t       | t       | t         | none              | f
Tables:
    "public.t1"
    "public.t2" WHERE (e = 99)

                                   Publication p3
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
----------+------------+---------+---------+---------+-----------+-------------------+----------
 postgres | f          | t       | t       | t       | t         | none              | f

[1]: https://github.com/postgres/postgres/commit/1f7e9ba3ac4eff13041abcc4c9c517ad835fa449

Thanks,
Shlok Kyal

#331shveta malik
shveta.malik@gmail.com
In reply to: Shlok Kyal (#330)
Re: Logical Replication of sequences

Few comments:

1)
The message of patch001 says:
----
When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also captured and stored in
pg_subscription_rel.srsublsn. This LSN will reflect the state of the
sequence at the time of synchronization. By comparing the current LSN
of the sequence on the publisher (via pg_sequence_state()) with the
stored LSN on the subscriber, users can detect if the sequence has
advanced and is now out-of-sync. This comparison will help determine
whether re-synchronization is needed for a given sequence.
----

I am unsure if pg_subscription_rel.srsublsn can help diagnose that a
sequence is out-of-sync. The page LSN can be the same while the sequence
values are still unsynchronized; the same page LSN does not necessarily
mean the sequences are synchronized.
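
For reference, the comparison that the commit message describes would look
roughly like this (only a sketch; pg_get_sequence_data here is the patched
version from 0001, and 's1' is just an example sequence name):

-- On the publisher: current page LSN of the sequence ('s1' is an example)
SELECT page_lsn FROM pg_get_sequence_data('s1');

-- On the subscriber: LSN recorded at the last synchronization
SELECT srsublsn FROM pg_subscription_rel WHERE srrelid = 's1'::regclass;

A newer page_lsn on the publisher suggests the sequence may have advanced
since the last sync, but equal LSNs do not guarantee the values still match.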

patch002:
2)
+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+ {
+ if (pubform->puballtables && pubform->puballsequences)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
TABLES, ALL SEQUENCES publications."));
+ else if (pubform->puballtables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+ NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
publications."));
+ else
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
SEQUENCES publications."));
+ }

Do you think we can make it as:

if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
{
ereport(ERROR,
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("Schemas cannot be added to or dropped from publication
defined for ALL TABLES, ALL SEQUENCES, or both"));
}

IMO, a generic message such as above is good enough.
Same is applicable to the next 'Tables or sequences' message.

patch003:
3)
/*
* Return whether the subscription currently has any relations.
*
* Note: Unlike HasSubscriptionRelations(), this function relies on cached
* information for subscription relations. Additionally, it should not be
* invoked outside of apply or tablesync workers, as MySubscription must be
* initialized first.
*/
bool
HasSubscriptionRelationsCached(void)
{
/* We need up-to-date subscription tables info here */
return FetchRelationStates(NULL);
}

a) The comment mentions the old function name HasSubscriptionRelations().
b) I think this function only worries about tables as we are passing
has_pending_sequences as NULL.

So do the comment and function name need amendments from relation to table?

patch005:
4)
+ * root partitioned tables. This is not applicable for FOR ALL SEQEUNCES
+ * publication.

a) SEQEUNCES --> SEQUENCES

b) We may say (omit FOR):
This is not applicable to ALL SEQUENCES publication.

5)
* If getting tables and not_ready is false get all tables, otherwise,
* only get tables that have not reached READY state.
* If getting sequences and not_ready is false get all sequences,
* otherwise, only get sequences that have not reached READY state (i.e. are
* still in INIT state).

Shall we rephrase to:
/*
* If getting tables and not_ready is false, retrieve all tables;
* otherwise, retrieve only tables that have not reached the READY state.
*
* If getting sequences and not_ready is false, retrieve all sequences;
* otherwise, retrieve only sequences that are still in the INIT state
* (i.e., have not reached the READY state).
*/

Reviewing further..

thanks
Shveta

#332Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#328)
Re: Logical Replication of sequences

On Mon, 15 Sept 2025 at 14:36, vignesh C <vignesh21@gmail.com> wrote:

On Mon, 8 Sept 2025 at 12:05, Chao Li <li.evan.chao@gmail.com> wrote:

On Sep 8, 2025, at 14:00, vignesh C <vignesh21@gmail.com> wrote:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
SELECT nextval('test_seq1');
-- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As it shows log_cnt now, after calling pg_get_sequence_data(), I suggest add 8 nextval(), so that sequence goes to 11, and log_cnt should become to 22.

Could you please explain the reason you’d like this to be done?

Because log_cnt is newly exposed, we want to verify its value in the test. When I first time ran the test code, I saw initial value of log_cnt was 32, then I thought log_cnt might get decreased if I ran nextval() again, but it didn’t. Only after I ran 10 (cache size) more nextval(), log_cnt got decreased by 10 to 22. The test code is a place for people to look for expected behavior. So I think adding more nextval() to verify and show the change of log_cnt is helpful.

This is addressed in the attached patch, also rebased the patch
because of recent commits.

Hi Vignesh,

Here are some more review comments:

For patch 0006:

1. Spelling mistake in:
+ appendStringInfo(combined_error_detail, "Insufficent permission for
sequence(s): (%s).",
+ insuffperm_seqs->data);

Insufficent -> Insufficient

2. Spelling mistake in:
+ ereport(LOG,
+ errmsg("logical replication sequence synchronization for
subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d
skipped, %d mismatched, %d insufficient pemission, %d missing, ",
+    MySubscription->name, (current_index /
MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+    batch_succeeded_count, batch_skipped_count,
batch_mismatched_count, batch_insuffperm_count,
+    batch_size - (batch_succeeded_count + batch_skipped_count +
batch_mismatched_count + batch_insuffperm_count)));

pemission -> permission

3. I ran the ALTER SUBSCRIPTION .. REFRESH PUBLICATION command and
DROP SEQUENCE command and got a warning for "leaked hash_seq_search
scan". Is it expected?

2025-09-17 19:06:48.663 IST [2995060] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2025-09-17 19:06:48.677 IST [2995060] LOG: logical replication
sequence synchronization for subscription "sub1" - total
unsynchronized: 0
2025-09-17 19:06:48.677 IST [2995060] WARNING: leaked hash_seq_search
scan for hash table 0x62b0a61d3450
2025-09-17 19:06:48.677 IST [2995060] LOG: logical replication
sequence synchronization worker for subscription "sub1" has finished

Steps to reproduce:
1. create publication for ALL SEQUENCES and create a subscription for
the publication on another node.
2. create sequence s1 both on publisher and subscriber.
3. Attach gdb on a psql terminal and add breakpoint at "line of
function call to AddSubscriptionRelState" inside function
AlterSubscription_refresh and run ALTER PUBLICATION .. REFRESH
PUBLICATION command on psql terminal
4. DROP sequence s1 on another terminal(for subscriber)
5. Continue the gdb
We will get the above warning.

Thanks,
Shlok Kyal

#333vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#331)
7 attachment(s)
Re: Logical Replication of sequences

On Wed, 17 Sept 2025 at 10:06, shveta malik <shveta.malik@gmail.com> wrote:

Few comments:

1)
The message of patch001 says:
----
When a sequence is synchronized to the subscriber, the page LSN of the
sequence from the publisher is also captured and stored in
pg_subscription_rel.srsublsn. This LSN will reflect the state of the
sequence at the time of synchronization. By comparing the current LSN
of the sequence on the publisher (via pg_sequence_state()) with the
stored LSN on the subscriber, users can detect if the sequence has
advanced and is now out-of-sync. This comparison will help determine
whether re-synchronization is needed for a given sequence.
----

I am unsure if pg_subscription_rel.srsublsn can help diagnose thatseq
is out-of-sync. The page-lsn can be the same but the sequence-values
can still be unsynchronized. Same page-lsn does not necessarily mean
synchronized sequences.

Currently we don't WAL-log every sequence change; it happens once every
32 changes. I felt this was fine. Do you want anything additional to
be included?
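
Just to illustrate the interval (a quick demo only, not part of the patch;
demo_seq is a throwaway sequence):

-- demo_seq is just an example name
CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');                -- returns 1
SELECT last_value, log_cnt FROM demo_seq;  -- shows 1 | 32

The first nextval() emits one WAL record covering the next 32 fetches;
log_cnt then counts down, and a new record is written only when it reaches
zero or after a checkpoint.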

patch002:
2)
+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+ {
+ if (pubform->puballtables && pubform->puballsequences)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
TABLES, ALL SEQUENCES publications."));
+ else if (pubform->puballtables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+ NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
publications."));
+ else
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
SEQUENCES publications."));
+ }

Do you think we can make it as:

if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
{
ereport(ERROR,
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("Schemas cannot be added to or dropped from publication
defined for ALL TABLES, ALL SEQUENCES, or both"));
}

IMO, a generic message such as above is good enough.
Same is applicable to the next 'Tables or sequences' message.

I'm OK with the generic error message; modified accordingly.

patch003:
3)
/*
* Return whether the subscription currently has any relations.
*
* Note: Unlike HasSubscriptionRelations(), this function relies on cached
* information for subscription relations. Additionally, it should not be
* invoked outside of apply or tablesync workers, as MySubscription must be
* initialized first.
*/
bool
HasSubscriptionRelationsCached(void)
{
/* We need up-to-date subscription tables info here */
return FetchRelationStates(NULL);
}

a) The comment mentions old function name HasSubscriptionRelations()
b) I think this function only worries about tables as we are passing
has_pending_sequences as NULL.

So does the comment and function name need amendments from relation to table?

Modified

patch005:
4)
+ * root partitioned tables. This is not applicable for FOR ALL SEQEUNCES
+ * publication.

a) SEQEUNCES --> SEQUENCES

b) We may say (omit FOR):
This is not applicable to ALL SEQUENCES publication.

Modified

5)
* If getting tables and not_ready is false get all tables, otherwise,
* only get tables that have not reached READY state.
* If getting sequences and not_ready is false get all sequences,
* otherwise, only get sequences that have not reached READY state (i.e. are
* still in INIT state).

Shall we rephrase to:
/*
* If getting tables and not_ready is false, retrieve all tables;
* otherwise, retrieve only tables that have not reached the READY state.
*
* If getting sequences and not_ready is false, retrieve all sequences;
* otherwise, retrieve only sequences that are still in the INIT state
* (i.e., have not reached the READY state).
*/

Modified

Thanks for the comments. The attached patches have the changes for the
same. Shlok's comments from [1]/messages/by-id/CANhcyEUHS+kjS0AQhVEgLF0Yf0dEZkxczEriN4su5mQqZnxU8g@mail.gmail.com have also been addressed.

[1]: /messages/by-id/CANhcyEUHS+kjS0AQhVEgLF0Yf0dEZkxczEriN4su5mQqZnxU8g@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250917-0001-Enhance-pg_get_sequence_data-function.patch
From 7596b53b322b9c41561e811aba206d4661e75593 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250917 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out | 15 ++++++++++----
 src/test/regress/sql/sequence.sql      |  5 ++++-
 5 files changed, 58 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 03e82d28c87..9fa55923bb7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..8eeb60a3378 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -266,6 +266,13 @@ SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new
           2 | t          | t
 (1 row)
 
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+ last_value | is_called | log_cnt_ok | lsn 
+------------+-----------+------------+-----
+          2 | t         | t          | t
+(1 row)
+
 DROP SEQUENCE foo_seq_new;
 -- renaming serial sequences
 ALTER TABLE serialtest1_f2_seq RENAME TO serialtest1_f2_foo;
@@ -840,10 +847,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt 
+------------+-----------+---------
+         10 | t         |      32
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..afc1f92407a 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -138,6 +138,9 @@ SELECT nextval('foo_seq_new');
 -- log_cnt can be higher if there is a checkpoint just at the right
 -- time, so just test for the expected range
 SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new;
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+
 DROP SEQUENCE foo_seq_new;
 
 -- renaming serial sequences
@@ -414,6 +417,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0
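
For reference, a minimal sketch of querying the extended pg_get_sequence_data()
added by the patch above, assuming a build with it applied; the sequence name is
hypothetical and the exact log_cnt/page_lsn values depend on WAL activity:

  CREATE SEQUENCE demo_seq;
  SELECT nextval('demo_seq');
  -- Returns last_value and is_called plus the new log_cnt and page_lsn columns;
  -- page_lsn is the LSN of the sequence's page, log_cnt is typically 31 or 32.
  SELECT last_value, is_called, log_cnt, page_lsn
    FROM pg_get_sequence_data('demo_seq');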

Attachment: v20250917-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 43c744add68a14dcd50d7cccd7d71c0e027dd112 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250917 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands are enhanced: \d now displays which
publications contain the specified sequence, and \dRp shows whether a
publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
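
A minimal sketch of the syntax this enables (publication names are
hypothetical):

  CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;            -- requires superuser
  CREATE PUBLICATION all_pub FOR ALL TABLES, ALL SEQUENCES;
  -- Not allowed: mixing ALL SEQUENCES with TABLE or TABLES IN SCHEMA,
  -- e.g. CREATE PUBLICATION bad_pub FOR ALL SEQUENCES, TABLE t1;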

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 114 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 577 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 785 insertions(+), 393 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..a98acfd3b67 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,16 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 }
 
 /*
@@ -2018,19 +2063,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9fd48acb1f8..03c0913bf72 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10703,7 +10709,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10723,13 +10734,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10841,6 +10853,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19616,6 +19650,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..0e442c28514 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,52 +4659,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..28794ef85da 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..38766b5709e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -69,38 +69,32 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
 CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop schema from 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables DROP TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't set schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables SET TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_fortable FOR TABLE testpub_tbl1;
 RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +103,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +127,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +148,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +160,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +174,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +201,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +216,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should raise a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  publication parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +330,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +348,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +380,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +396,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +415,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +426,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +462,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +475,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +593,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +888,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1081,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1292,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1335,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1409,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1418,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1431,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1460,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1486,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1557,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1568,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1589,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1601,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1613,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1624,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1635,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1646,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1677,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1689,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1771,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1792,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1926,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should raise a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e90af5b2ad3..8165093a737 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250917-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 5b229d1da2878c899ad5e4833d5e9a0660da15f4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:08:03 +0530
Subject: [PATCH v20250917 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by moving common synchronization code
into a new syncutils.c file. This refactoring will facilitate the
development of the sequence synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 196 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 242 insertions(+), 198 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..2ba12517e93 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1789,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 9b5885d57cf..7c437c66339 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5878,7 +5878,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0e442c28514..a1e47781dbe 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5312,7 +5312,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5370,7 +5370,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8165093a737..8e6913c01a2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250917-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From 0a607982150dd466612828e2d60eb8103b10ae7a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250917 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
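
For context, a minimal usage sketch (the subscription name mysub is made
up for illustration; the SQL syntax itself is unchanged, only how the
command is referred to internally and in messages):

    -- Refresh the subscription's relation membership from its publications.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- With this patch the full command name appears in error reports,
    -- e.g. when attempted inside a transaction block:
    BEGIN;
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
    -- ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
    END;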
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 03c0913bf72..6a5b226c906 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10984,7 +10984,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250917-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From 3716f31aeac2aa286a69a612801bfdd4bdc41741 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 11:38:19 +0530
Subject: [PATCH v20250917 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command resets the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
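
As a rough end-to-end sketch of how this is intended to be used (the
publication name, subscription name, and connection string below are
made up for illustration):

    -- On the publisher:
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
    SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

    -- On the subscriber:
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;

    -- Later, re-copy the current sequence values from the publisher by
    -- marking the subscription's sequence entries as INIT again:
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;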

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 329 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 403 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..b98d9ae78a6 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..0f5f6ab8ade 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that are still in the INIT state
+ * (i.e., have not reached the READY state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +584,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +609,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..8c60f7a5011 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relation in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers for relations that are not sequences.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,8 +2739,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2648,7 +2786,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2799,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2815,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6a5b226c906..6a06044d5fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10989,6 +10989,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9fa55923bb7..34409796b85 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12294,6 +12294,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8e6913c01a2..8620169bdde 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0
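
As a quick illustration of the syntax added above (a sketch only: the
publication side assumes the FOR ALL SEQUENCES support added earlier in this
series, and the object names are made up):

    -- publisher
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- subscriber
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub_seq;

    -- later: re-mark all of the subscription's known sequences for
    -- synchronization (AlterSubscription_refresh_seq above sets them to
    -- DATASYNC); this variant neither adds nor removes sequences
    ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;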

Attachment: v20250917-0006-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From c1ad923e7de296262116d0f7373dd9857007c15e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 15:22:49 +0530
Subject: [PATCH v20250917 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of the sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; synchronized sequences are committed in batches of 100

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      newly published sequences are not added and dropped ones are not
      removed in this case

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |  44 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 752 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1414 insertions(+), 205 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl
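
For anyone trying this out, a couple of sketch queries on the subscriber to
observe the new worker type and the new statistics counter added by this
patch (exact output will vary; column names are as exposed by the updated
views):

    -- the sequencesync worker reports worker_type 'sequence synchronization'
    SELECT subid, pid, worker_type FROM pg_stat_subscription;

    -- new per-subscription counter of sequence synchronization failures
    SELECT subname, sequence_sync_error_count FROM pg_stat_subscription_stats;
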

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 0f5f6ab8ade..cf5ebcc3fbf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * sequence's log_cnt while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 8c60f7a5011..5c860139933 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -2727,7 +2727,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
-	List	   *tablelist = NIL;
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
 	bool		check_relkind = (server_version >= 190000);
@@ -2739,25 +2739,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2774,7 +2757,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2782,6 +2765,15 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, the publisher can also include sequences, so fetch them */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2833,13 +2825,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		relinfo->relkind = relkind;
 
 		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
-			list_member_rangevar(tablelist, relinfo->rv))
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, relinfo);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2847,7 +2839,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c900b6cf3b1..94b035978b9 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last sequencesync worker start time (last_seqsync_start_time)
+ * tracked in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..5ecce367e1f
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,752 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 12
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	hash_seq_init(&status, sequences_to_copy);
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		if (batch_size == 0)
+		{
+			CommitTransactionCommand();
+			break;
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Advance current_index by the batch size rather than by the number
+		 * of fetched rows; some sequences may be missing on the publisher, so
+		 * the result set can be smaller than the batch. The hash_search()
+		 * calls with HASH_REMOVE above ensure that only unprocessed entries
+		 * remain in the hash table.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if the hash table is gone or no sequence is listed yet */
+	if (sequences_to_copy == NULL ||
+		hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
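+/*
+ * Hash function for the sequences_to_copy hash table. The key is the
+ * (schema name, sequence name) pair, so combine the hashes of both names.
+ */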
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes with XOR. */
+	return h1 ^ h2;
+}
+
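+/*
+ * Match function for the sequences_to_copy hash table. Keys are equal only
+ * when both the schema name and the sequence name match.
+ */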
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequence sync worker sequences",
+									256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
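+
+	/*
+	 * Collect this subscription's pg_subscription_rel entries that are not
+	 * yet READY and remember the sequences among them; tables are handled by
+	 * the tablesync workers instead.
+	 */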
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key,
+								HASH_ENTER, &found);
+		Assert(seq_entry != NULL);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn, subid);
+
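+	/*
+	 * Done copying. Free the per-sequence name strings (allocated in
+	 * CacheMemoryContext) before destroying the hash table itself.
+	 */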
+	hash_seq_init(&hash_seq, sequences_to_copy);
+	while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		pfree(seq_entry->seqname);
+		pfree(seq_entry->nspname);
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
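+				/* Look up (or create) the last-start-time entry for this table. */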
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7c437c66339..ec22d9e71f3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5577,7 +5603,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5697,8 +5724,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5809,6 +5836,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5828,14 +5859,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5906,6 +5939,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
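+	/* Determine which worker type we are, for error reporting below. */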
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5918,9 +5955,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 6bc6be13d2a..f5a09c0f536 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34409796b85..45dd10cd9ee 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5700,9 +5700,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber to match the publisher so
+	# that the sequence synchronization no longer errors out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise extra values will be fetched
+# for the sequences, which would cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
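+
+# Note: the expected log_cnt values depend on the publisher WAL-logging 32
+# sequence values ahead of each fetch (SEQ_LOG_VALS in sequence.c); if that
+# prefetch amount ever changes, the expected results in this test need
+# adjusting.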
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should resynchronize
+# the already-subscribed sequences, but newly published sequences should not
+# be synced (plain REFRESH PUBLICATION adds those).
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error when a
+# sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message about the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8620169bdde..12f0bd04bcc 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250917-0007-Documentation-for-sequence-synchronization.patch
From 8b94d10f4d3059e7807c3c02dcf43471880244e2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250917 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 279 +++++++++++++++++++---
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 +++++
 8 files changed, 477 insertions(+), 83 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e9b420f3ddb..7138de1acb8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..9313cbfd1fd 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1049,24 +1053,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1495,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Sequence values on the subscriber may frequently become out of sync due to
+    further updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values.  This can be done
+     by executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to the subscription or remove
+      sequences that are no longer published.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#334vignesh C
vignesh21@gmail.com
In reply to: Shlok Kyal (#332)
7 attachment(s)
Re: Logical Replication of sequences

On Wed, 17 Sept 2025 at 19:08, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

On Mon, 15 Sept 2025 at 14:36, vignesh C <vignesh21@gmail.com> wrote:

On Mon, 8 Sept 2025 at 12:05, Chao Li <li.evan.chao@gmail.com> wrote:

On Sep 8, 2025, at 14:00, vignesh C <vignesh21@gmail.com> wrote:

1 - 0001
```
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..c8adddbfa31 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
SELECT nextval('test_seq1');
-- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');

DROP SEQUENCE test_seq1;
```

As pg_get_sequence_data() now shows log_cnt, I suggest adding 8 more nextval() calls, so that the sequence goes to 11 and log_cnt becomes 22.

Could you please explain the reason you’d like this to be done?

Because log_cnt is newly exposed, we want to verify its value in the test. When I ran the test code for the first time, I saw that the initial value of log_cnt was 32; I thought log_cnt might decrease if I ran nextval() again, but it didn't. Only after I ran 10 (the cache size) more nextval() calls did log_cnt decrease by 10, to 22. The test code is a place where people look for the expected behavior, so I think adding more nextval() calls to verify and show the change of log_cnt would be helpful.
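To make that progression concrete, here is a minimal psql sketch of the behavior described above; demo_seq is a hypothetical sequence (not the regression test's test_seq1), and the exact numbers assume CACHE 10 and the default SEQ_LOG_VALS of 32:

```
-- Hypothetical sequence, used only to illustrate how log_cnt moves.
CREATE SEQUENCE demo_seq CACHE 10;

SELECT nextval('demo_seq');                -- returns 1; a batch is cached and WAL-logged ahead
SELECT last_value, log_cnt FROM demo_seq;  -- log_cnt is expected to be 32 here

-- Values 2..10 come from the session cache; the 11th call fetches a new batch.
SELECT nextval('demo_seq') FROM generate_series(1, 10);
SELECT last_value, log_cnt FROM demo_seq;  -- log_cnt is expected to drop by 10, to 22
```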

This is addressed in the attached patch, also rebased the patch
because of recent commits.

Hi Vignesh,

Here are some more review comments:

For patch 0006:

1. Spelling mistake in:
+ appendStringInfo(combined_error_detail, "Insufficent permission for
sequence(s): (%s).",
+ insuffperm_seqs->data);

Insufficent -> Insufficient

2. Spelling mistake in:
+ ereport(LOG,
+ errmsg("logical replication sequence synchronization for
subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d
skipped, %d mismatched, %d insufficient pemission, %d missing, ",
+    MySubscription->name, (current_index /
MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+    batch_succeeded_count, batch_skipped_count,
batch_mismatched_count, batch_insuffperm_count,
+    batch_size - (batch_succeeded_count + batch_skipped_count +
batch_mismatched_count + batch_insuffperm_count)));

pemission -> permission

3. I ran the ALTER SUBSCRIPTION .. REFRESH PUBLICATION command and
DROP SEQUENCE command and got a warning for "leaked hash_seq_search
scan". Is it expected?

2025-09-17 19:06:48.663 IST [2995060] LOG: logical replication
sequence synchronization worker for subscription "sub1" has started
2025-09-17 19:06:48.677 IST [2995060] LOG: logical replication
sequence synchronization for subscription "sub1" - total
unsynchronized: 0
2025-09-17 19:06:48.677 IST [2995060] WARNING: leaked hash_seq_search
scan for hash table 0x62b0a61d3450
2025-09-17 19:06:48.677 IST [2995060] LOG: logical replication
sequence synchronization worker for subscription "sub1" has finished

Steps to reproduce:
1. Create a publication FOR ALL SEQUENCES and create a subscription for
the publication on another node.
2. Create sequence s1 on both the publisher and the subscriber.
3. Attach gdb to the backend of a psql session, add a breakpoint at the
line of the call to AddSubscriptionRelState inside the function
AlterSubscription_refresh, and run the ALTER SUBSCRIPTION .. REFRESH
PUBLICATION command in that psql session.
4. DROP sequence s1 from another session (on the subscriber).
5. Continue in gdb.
We will get the above warning.
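Putting the SQL side of those steps together, a minimal sketch (object names and the connection string are illustrative; the gdb part is only described in comments):

```
-- Step 1, on the publisher:
CREATE PUBLICATION pub1 FOR ALL SEQUENCES;

-- Step 1, on the subscriber (illustrative connection string):
CREATE SUBSCRIPTION sub1
    CONNECTION 'host=localhost dbname=postgres'
    PUBLICATION pub1;

-- Step 2: create the sequence on both the publisher and the subscriber.
CREATE SEQUENCE s1;

-- Step 3, on the subscriber: with gdb attached to the backend and stopped just
-- before the AddSubscriptionRelState call inside AlterSubscription_refresh, run:
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

-- Step 4: from another subscriber session, while the backend above is stopped:
DROP SEQUENCE s1;

-- Step 5: continue in gdb; the "leaked hash_seq_search scan" warning appears in
-- the subscriber's log.
```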

Thanks for the comments, these are handled in the attached patch.

Regards,
Vignesh

Attachments:

v20250918-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch
From 3716f31aeac2aa286a69a612801bfdd4bdc41741 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 11:38:19 +0530
Subject: [PATCH v20250918 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 329 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 403 insertions(+), 114 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..b98d9ae78a6 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE publications; FOR ALL TABLES/SEQUENCES
+ * publications should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..0f5f6ab8ade 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,22 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that are still in the INIT state
+ * (i.e., have not reached the READY state).
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +584,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +609,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..8c60f7a5011 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The relations include both
+			 * tables and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
 
-				AddSubscriptionRelState(subid, relid, table_state,
+
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation kind is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences of the subscription with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
 	List	   *tablelist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,8 +2739,25 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
-	if (server_version >= 160000)
+	/* Get the list of tables and sequences from the publisher. */
+	if (server_version >= 190000)
+	{
+		tableRow[2] = INT2VECTOROID;
+
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "       FROM pg_class c\n"
+						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
+						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
+						 "                FROM pg_publication\n"
+						 "                WHERE pubname IN (%s)) AS gpt\n"
+						 "             ON gpt.relid = c.oid\n"
+						 "      UNION ALL\n"
+						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+						 "       FROM pg_catalog.pg_publication_sequences s\n"
+						 "        WHERE s.pubname IN (%s)",
+						 pub_names->data, pub_names->data);
+	}
+	else if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2648,7 +2786,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2799,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2815,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(tablelist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			tablelist = lappend(tablelist, relinfo);
 
 		ExecClearTuple(slot);
 	}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6a5b226c906..6a06044d5fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10989,6 +10989,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9fa55923bb7..34409796b85 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12294,6 +12294,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 68ee5670124..8d8487c2454 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8e6913c01a2..8620169bdde 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0
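
To illustrate the user-visible pieces added above, here is a minimal usage
sketch. The names pub1 and sub1 are made up, and a publication that actually
contains sequences has to be created with the publication-side syntax added
elsewhere in this series, so treat this only as an illustration:

-- On the publisher: list the sequences that belong to a publication.
SELECT * FROM pg_publication_sequences WHERE pubname = 'pub1';

-- On the subscriber: mark all subscribed sequences for (re)synchronization.
-- The subscription must be enabled, per the check added in subscriptioncmds.c.
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;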

Attachment: v20250918-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 5b229d1da2878c899ad5e4833d5e9a0660da15f4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:08:03 +0530
Subject: [PATCH v20250918 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 196 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 242 insertions(+), 198 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..2ba12517e93 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1789,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 9b5885d57cf..7c437c66339 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5878,7 +5878,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0e442c28514..a1e47781dbe 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5312,7 +5312,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5370,7 +5370,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8165093a737..8e6913c01a2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250918-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From 0a607982150dd466612828e2d60eb8103b10ae7a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250918 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 03c0913bf72..6a5b226c906 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10984,7 +10984,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 73e505c25b3..68ee5670124 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0
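
A minimal usage sketch of the command affected by the patch above, assuming
an existing, enabled subscription named "mysub" (a hypothetical name). The
SQL syntax itself is unchanged; only the internal enum and the error/hint
wording now spell out REFRESH PUBLICATION:

  -- Refresh the set of tables covered by the subscribed publications.
  ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

  -- Options are unchanged, e.g. skip the initial copy of newly added tables.
  ALTER SUBSCRIPTION mysub REFRESH PUBLICATION WITH (copy_data = false);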

Attachment: v20250918-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch)
From 7596b53b322b9c41561e811aba206d4661e75593 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250918 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence on the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN
reflects the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect whether the sequence has advanced and is now out of sync.
This comparison helps determine whether re-synchronization is needed
for a given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out | 15 ++++++++++----
 src/test/regress/sql/sequence.sql      |  5 ++++-
 5 files changed, 58 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 03e82d28c87..9fa55923bb7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..8eeb60a3378 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -266,6 +266,13 @@ SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new
           2 | t          | t
 (1 row)
 
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+ last_value | is_called | log_cnt_ok | lsn 
+------------+-----------+------------+-----
+          2 | t         | t          | t
+(1 row)
+
 DROP SEQUENCE foo_seq_new;
 -- renaming serial sequences
 ALTER TABLE serialtest1_f2_seq RENAME TO serialtest1_f2_foo;
@@ -840,10 +847,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt 
+------------+-----------+---------
+         10 | t         |      32
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..afc1f92407a 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -138,6 +138,9 @@ SELECT nextval('foo_seq_new');
 -- log_cnt can be higher if there is a checkpoint just at the right
 -- time, so just test for the expected range
 SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new;
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+
 DROP SEQUENCE foo_seq_new;
 
 -- renaming serial sequences
@@ -414,6 +417,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0
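
A rough illustration of the enhanced function, assuming the patch above is
applied; "myseq" is a hypothetical sequence. The publisher-side page_lsn is
what the later patches store in pg_subscription_rel.srsublsn at sync time,
so a stored value older than the sequence's current page_lsn suggests the
sequence has advanced since the last synchronization:

  CREATE SEQUENCE myseq;
  SELECT nextval('myseq');

  -- pg_get_sequence_data() now returns four columns instead of two.
  SELECT last_value, is_called, log_cnt, page_lsn
    FROM pg_get_sequence_data('myseq');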

Attachment: v20250918-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 43c744add68a14dcd50d7cccd7d71c0e027dd112 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250918 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql commands are enhanced to display which
publications contain the specified sequence (\d command) and whether a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 114 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 577 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 785 insertions(+), 393 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..a98acfd3b67 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,16 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 }
 
 /*
@@ -2018,19 +2063,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9fd48acb1f8..03c0913bf72 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10703,7 +10709,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10723,13 +10734,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10841,6 +10853,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19616,6 +19650,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..0e442c28514 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,52 +4659,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..28794ef85da 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 86a236bd58b..73e505c25b3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..38766b5709e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -69,38 +69,32 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
 CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop schema from 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables DROP TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't set schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables SET TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_fortable FOR TABLE testpub_tbl1;
 RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +103,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +127,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +148,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +160,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +174,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +201,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +216,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should give a notice that the parameters are ignored
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  publication parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +330,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +348,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +380,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +396,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +415,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +426,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +462,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +475,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +593,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +888,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1081,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1292,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1335,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1409,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1418,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1431,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1460,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1486,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1557,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1568,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1589,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1601,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1613,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1624,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1635,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1646,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1677,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1689,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1771,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1792,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1926,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e90af5b2ad3..8165093a737 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

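As a quick illustration of what the regression tests above exercise (a minimal sketch; the sequence and publication names here are made up):

    CREATE SEQUENCE s1;
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- puballsequences is set for such publications, and \dRp+ now shows
    -- the corresponding "All sequences" column
    SELECT pubname, puballtables, puballsequences
    FROM pg_publication
    WHERE pubname = 'seq_pub';
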
Attachment: v20250918-0006-New-worker-for-sequence-synchronization-du.patch (text/x-patch; charset=US-ASCII)
From c5577cbf79ad0496ecd4cfddfd2273908159ea82 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 15:22:49 +0530
Subject: [PATCH v20250918 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, newly
      published sequences are not added and dropped sequences are not
      removed in this case.

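For illustration, a rough usage sketch with this patch series applied (the
subscription, connection, and publication names are hypothetical):

    -- Sequences from the publication are added to pg_subscription_rel in
    -- INIT state and synchronized by the sequencesync worker.
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=pub_host dbname=postgres'
        PUBLICATION seq_pub;

    -- Re-synchronize all previously known sequences using the new command.
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION SEQUENCES;

    -- Sequences still waiting to be synchronized ('i' = INIT, 'r' = READY).
    SELECT srrelid::regclass, srsubstate
    FROM pg_subscription_rel
    WHERE srsubstate <> 'r';
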
Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |  44 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 756 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1418 insertions(+), 205 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 0f5f6ab8ade..cf5ebcc3fbf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * The log_cnt argument is currently used only by the sequencesync worker to
+ * set log_cnt while synchronizing sequence values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 8c60f7a5011..5c860139933 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
@@ -2727,7 +2727,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	StringInfoData cmd;
 	TupleTableSlot *slot;
 	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
-	List	   *tablelist = NIL;
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
 	bool		check_relkind = (server_version >= 190000);
@@ -2739,25 +2739,8 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables and sequences from the publisher. */
-	if (server_version >= 190000)
-	{
-		tableRow[2] = INT2VECTOROID;
-
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
-						 "       FROM pg_class c\n"
-						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
-						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
-						 "                FROM pg_publication\n"
-						 "                WHERE pubname IN (%s)) AS gpt\n"
-						 "             ON gpt.relid = c.oid\n"
-						 "      UNION ALL\n"
-						 "      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
-						 "       FROM pg_catalog.pg_publication_sequences s\n"
-						 "        WHERE s.pubname IN (%s)",
-						 pub_names->data, pub_names->data);
-	}
-	else if (server_version >= 160000)
+	/* Get the list of relations from the publisher */
+	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
 
@@ -2774,7 +2757,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2782,6 +2765,15 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* Sequences are included in the result only for publisher version 19 and newer */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2833,13 +2825,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 		relinfo->relkind = relkind;
 
 		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
-			list_member_rangevar(tablelist, relinfo->rv))
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, relinfo);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2847,7 +2839,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c900b6cf3b1..94b035978b9 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset last_seqsync_start_time (the time at which a sequencesync worker was
+ * last started), which is recorded in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..4a4b2b04e5b
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,756 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker will periodically check if there are any sequences in INIT
+ * state and will start a sequencesync worker if needed.
+ *
+ * The sequencesync worker retrieves the sequences to be synchronized from the
+ * pg_subscription_rel catalog table.  It synchronizes multiple sequences per
+ * single transaction by fetching the sequence value and page LSN from the
+ * remote publisher and updating them in the local subscriber sequence.  After
+ * synchronization, it sets the sequence state to READY.
+ *
+ * So the state progression is always just: INIT -> READY.
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    has no direct database access and would need a framework to establish
+ *    connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 12
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy the existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber's
+ * sequence to the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	hash_seq_init(&status, sequences_to_copy);
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		if (batch_size == 0)
+		{
+			CommitTransactionCommand();
+			break;
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because sequences that are missing on the
+		 * publisher return no rows. Entries that were processed have already
+		 * been removed from the hash table via hash_search with HASH_REMOVE.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname) + 1);
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname) + 1);
+
+	/* Combine the two component hashes with XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequence sync worker sequences",
+									256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key,
+								HASH_ENTER, &found);
+		Assert(seq_entry != NULL);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7c437c66339..ec22d9e71f3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5577,7 +5603,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 am_tablesync_worker() ? WORKERTYPE_TABLESYNC : WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5697,8 +5724,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5809,6 +5836,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5828,14 +5859,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5906,6 +5939,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5918,9 +5955,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 6bc6be13d2a..f5a09c0f536 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34409796b85..45dd10cd9ee 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5700,9 +5700,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber to match the publisher so the sequence sync can succeed.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# changes to existing sequences, but newly published sequences should not
+# be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# After ALTER SUBSCRIPTION ... REFRESH PUBLICATION, sequence synchronization
+# should fail for sequence definitions not matching between publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for the mismatched sequence definition is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the removal of the missing sequence from resynchronization is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8620169bdde..12f0bd04bcc 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20250918-0007-Documentation-for-sequence-synchronization.patch (text/x-patch)
From feb60db4bbc8ac74e3fb508a1fc801424401f797 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250918 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 279 +++++++++++++++++++---
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 +++++
 8 files changed, 477 insertions(+), 83 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e9b420f3ddb..7138de1acb8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..9313cbfd1fd 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1049,24 +1053,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1495,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to the subscription or remove
+      sequences that are no longer published.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#335shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#334)
Re: Logical Replication of sequences

On Thu, Sep 18, 2025 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, these are handled in the attached patch.

Please find a few comments:

patch005:
1)
GetSubscriptionRelations:
+ /* Skip sequences if they were not requested */
+ if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+ continue;
+
+ /* Skip tables if they were not requested */
+ if (!get_tables && (relkind != RELKIND_SEQUENCE))
+ continue;

The use of negated conditions makes the logic harder to follow,
especially in the second if block.

Can we write it as:
bool is_sequence = (relkind == RELKIND_SEQUENCE);

/* Skip if the relation type is not requested */
if ((get_tables && is_sequence) ||
(get_sequences && !is_sequence))
continue;

Or at-least:
/* Skip sequences if they were not requested */
if (get_tables && (relkind == RELKIND_SEQUENCE))
continue;

/* Skip tables if they were not requested */
if (get_sequences && (relkind != RELKIND_SEQUENCE))
continue;

2)

AlterSubscription_refresh_seq:

+ UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+    InvalidXLogRecPtr, false);

Now it seems we are setting SUBREL_STATE_DATASYNC state as well for
sequences. Earlier it was INIT only.

So we need correction at 2 places:
a)
Comment atop GetSubscriptionRelations(), which mentions:
* If getting sequences and not_ready is false, retrieve all sequences;
* otherwise, retrieve only sequences that are still in the INIT state
* (i.e., have not reached the READY state).

We shall change it to say sequences that have not yet reached the READY state.

b)
patch 006's commit message says:
This patch introduces sequence synchronization:
Sequences have 2 states:
- INIT (needs synchronizing)
- READY (is already synchronized)

We shall mention the third state as well.

3)
There is some optimization in fetch_relation_list() in patch006. I
think it should be in patch005 itself, where the new logic to fetch
sequences and relkind was added.
Or do we need those for patch006 specifically?

patch006:
4)
sequencesync.c:

+ * Sequences to be synchronized by the sequencesync worker will
+ * be added to pg_subscription_rel in INIT state when one of the following
+ * commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES

I think this comment needs change. 'REFRESH PUBLICATION SEQUENCES' is
not doing that anymore.

5)
* So the state progression is always just: INIT -> READY.
I see that SUBREL_STATE_DATASYNC is also being set now by
AlterSubscription_refresh_seq(), so this comment needs updating as well.

thanks
Shveta

#336shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#335)
Re: Logical Replication of sequences

6)
I tried to test the patch. When there are more sequences than the
batch size, but not enough to completely fill the final batch, I get
this error:

LOG: logical replication sequence synchronization for subscription
"sub1" - total unsynchronized: 118
LOG: logical replication sequence synchronization for subscription
"sub1" - batch #1 = 100 attempted, 100 succeeded, 0 skipped, 0
mismatched, 0 insufficient pemission, 0 missing,
WARNING: leaked hash_seq_search scan for hash table 0x5fcc78ff8f90
ERROR: no hash_seq_search scan for hash table "Logical replication
sequence sync worker sequences"
LOG: background worker "logical replication sequencesync worker" (PID
137165) exited with exit code 1

This happened with 118 sequences. It can easily be reproduced by
reducing the batch size (MAX_SEQUENCES_SYNC_PER_BATCH) to, say, 5 and
having 6 sequences in total. It seems we break out of the
batch-building loop only when batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH,
so once hash_seq_search() is called more times than there are entries
in the hash table, it raises the above error.
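
For reference, the usual dynahash iteration pattern looks roughly like
the sketch below (hypothetical names, not the patch's actual code): the
scan ends on its own when hash_seq_search() returns NULL, and an early
exit has to call hash_seq_term():

HASH_SEQ_STATUS status;
LogicalRepSequenceInfo *seqinfo;
int batch_size = 0;

hash_seq_init(&status, sequence_hash);    /* "sequence_hash" is a hypothetical HTAB */
while ((seqinfo = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
{
    batch[batch_size++] = seqinfo;        /* collect the entry into the current batch */

    if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH)
    {
        /* abandoning the scan before it returns NULL requires terminating it */
        hash_seq_term(&status);
        break;
    }
}

Written that way, a partially filled last batch simply falls out of the
loop and never calls hash_seq_search() on a scan that has already
finished.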

thanks
Shveta

#337vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#335)
7 attachment(s)
Re: Logical Replication of sequences

On Mon, 22 Sept 2025 at 12:03, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Sep 18, 2025 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, these are handled in the attached patch.

Please find a few comments:

patch005:
1)
GetSubscriptionRelations:
+ /* Skip sequences if they were not requested */
+ if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+ continue;
+
+ /* Skip tables if they were not requested */
+ if (!get_tables && (relkind != RELKIND_SEQUENCE))
+ continue;

The use of negated conditions makes the logic harder to follow,
especially in the second if block.

Can we write it as:
bool is_sequence = (relkind == RELKIND_SEQUENCE);

/* Skip if the relation type is not requested */
if ((get_tables && is_sequence) ||
(get_sequences && !is_sequence))
continue;

Or at-least:
/* Skip sequences if they were not requested */
if (get_tables && (relkind == RELKIND_SEQUENCE))
continue;

/* Skip tables if they were not requested */
if (get_sequences && (relkind != RELKIND_SEQUENCE))
continue;

I felt this would not work. Say we want both sequences and tables;
won't the sequences get skipped in that case because of:
if (get_tables && (relkind == RELKIND_SEQUENCE))
continue;
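
For what it's worth, a variant that skips a relation only when its kind
was not requested (a hypothetical sketch, not the code in the patch)
keeps both kinds when both flags are set:

bool is_sequence = (relkind == RELKIND_SEQUENCE);

/* skip the relation only if its kind was not requested */
if ((is_sequence && !get_sequences) ||
    (!is_sequence && !get_tables))
    continue;

This is equivalent to the two existing checks, just grouped into one
condition.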

The rest of the comments were fixed; the comment from [1] is also fixed
in the attached patch.
[1]: /messages/by-id/CAJpy0uAdD9XtZCE34BJhbvncMgfmMuTS0ZXLP1P=g+wpRC8vqQ@mail.gmail.com

Regards,
Vignesh

Attachments:

v20250923-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch)
From 8222c6f37ebd5cee8f8daebf6d1d68f7e2f0e4b5 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250923 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out | 15 ++++++++++----
 src/test/regress/sql/sequence.sql      |  5 ++++-
 5 files changed, 58 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 01eba3b5a19..f6c44b188fd 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..8eeb60a3378 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -266,6 +266,13 @@ SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new
           2 | t          | t
 (1 row)
 
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+ last_value | is_called | log_cnt_ok | lsn 
+------------+-----------+------------+-----
+          2 | t         | t          | t
+(1 row)
+
 DROP SEQUENCE foo_seq_new;
 -- renaming serial sequences
 ALTER TABLE serialtest1_f2_seq RENAME TO serialtest1_f2_foo;
@@ -840,10 +847,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt 
+------------+-----------+---------
+         10 | t         |      32
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..afc1f92407a 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -138,6 +138,9 @@ SELECT nextval('foo_seq_new');
 -- log_cnt can be higher if there is a checkpoint just at the right
 -- time, so just test for the expected range
 SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new;
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+
 DROP SEQUENCE foo_seq_new;
 
 -- renaming serial sequences
@@ -414,6 +417,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250923-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From 78e9c6535a1928682f94ade6c2996454857b31c4 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250923 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 03c0913bf72..6a5b226c906 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10984,7 +10984,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index afa78cb4f5d..d4b6cc52319 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250923-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From cd0dff7a5c2d867b12e2b91952d4720d5859f19a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:08:03 +0530
Subject: [PATCH v20250923 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 196 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 242 insertions(+), 198 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 31a92d1a24a..d3882b40a39 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -963,7 +963,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..2ba12517e93 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1789,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..a85aca2dceb 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5876,7 +5876,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0e442c28514..a1e47781dbe 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5312,7 +5312,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5370,7 +5370,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8165093a737..8e6913c01a2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2916,7 +2916,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250923-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From 2a0aee17154ab0dddfd27b9a15dcde5df330e9b9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 11:38:19 +0530
Subject: [PATCH v20250923 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
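
For example (the subscription name below is hypothetical), the new command
re-marks all sequences of the subscription so that they are resynchronized
from the publisher:

ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;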

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  60 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 325 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 396 insertions(+), 116 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..b98d9ae78a6 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..189b22a9f56 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,23 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+			continue;
+
+		/* Skip tables if they were not requested */
+		if (!get_tables && (relkind != RELKIND_SEQUENCE))
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..833b9700763 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relation in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Mark all sequences of the subscription as being in DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get the list of sequences that are part of the subscription. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,7 +2739,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2636,7 +2757,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2644,11 +2765,20 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* Sequences can be included in publications from server version 19 onwards */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2791,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2807,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2700,7 +2839,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
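
For readability, on a version 19 or later publisher the query assembled in
fetch_relation_list() above ends up looking roughly like this ('mypub'
stands in for the pub_names list built from the subscription's
publications):

    SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind
           FROM pg_class c
             JOIN pg_namespace n ON n.oid = c.relnamespace
             JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*
                    FROM pg_publication
                    WHERE pubname IN ( 'mypub' )) AS gpt
                 ON gpt.relid = c.oid
    UNION ALL
      SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::"char" AS relkind
      FROM pg_catalog.pg_publication_sequences s
      WHERE s.pubname IN ('mypub')
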
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6a5b226c906..6a06044d5fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10989,6 +10989,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ec559146640..4a638fbecc9 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2312,7 +2312,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f6c44b188fd..f1930412b8b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12294,6 +12294,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
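
The new function backs the pg_publication_sequences view (see the rules.out
change further below). A quick way to check which sequences a publication
includes would be something like (publication name hypothetical):

    SELECT schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'pub_seq';
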
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index d4b6cc52319..6e4d1d49b24 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8e6913c01a2..8620169bdde 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2899,6 +2899,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0
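
For reference, the subscriber-side command added above is invoked as
follows (subscription name arbitrary; it is rejected for a disabled
subscription), after which AlterSubscription_refresh_seq() sets all
sequences of the subscription back to DATASYNC so that they get
re-synchronized:

    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;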

v20250923-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch
From 9c957994ab03dbc5320edcd4eb8350ef10903fdd Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250923 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Additionally, psql's \d command now displays which publications include
the specified sequence, and \dRp shows whether a publication includes
all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
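
For illustration, a minimal usage sketch of the new syntax (publication
names here are arbitrary):

    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    CREATE PUBLICATION pub_both FOR ALL TABLES, ALL SEQUENCES;
    -- publication parameters (WITH ...) are rejected for a
    -- sequences-only publication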

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 114 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 577 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 785 insertions(+), 393 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..a98acfd3b67 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,16 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 }
 
 /*
@@ -2018,19 +2063,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
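
As a quick illustration of the privilege checks above (role and publication
names hypothetical), a non-superuser attempting this now gets:

    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;
    ERROR:  must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication
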
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9fd48acb1f8..03c0913bf72 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10703,7 +10709,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10723,13 +10734,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10841,6 +10853,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19616,6 +19650,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..0e442c28514 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,52 +4659,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..28794ef85da 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6b20a4404b2..ec559146640 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3583,11 +3583,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4ed14fc5b78..afa78cb4f5d 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..38766b5709e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -69,38 +69,32 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
 CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop schema from 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables DROP TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't set schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables SET TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_fortable FOR TABLE testpub_tbl1;
 RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +103,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +127,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +148,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +160,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +174,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +201,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +216,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describing the sequence lists both publications it belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should give a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with a WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  publication parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +330,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +348,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +380,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +396,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +415,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +426,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +462,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +475,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +593,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +888,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1081,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1292,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1335,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1409,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1418,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1431,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1460,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1486,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1557,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1568,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1589,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1601,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1613,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1624,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1635,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1646,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1677,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1689,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1771,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1792,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1926,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e90af5b2ad3..8165093a737 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

v20250923-0006-New-worker-for-sequence-synchronization-du.patchtext/x-patch; charset=UTF-8; name=v20250923-0006-New-worker-for-sequence-synchronization-du.patchDownload
From 69a81fa9f89e22e03d16c61c826cc70793d65f31 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 23 Sep 2025 17:33:53 +0530
Subject: [PATCH v20250923 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization.
Each subscribed sequence (tracked in pg_subscription_rel) has one of 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)
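
These states live in the srsubstate column of pg_subscription_rel ('i', 'd'
and 'r' respectively, as for tables), so a rough way to inspect them on the
subscriber is a query along these lines (a sketch only):

    SELECT srrelid::regclass AS seq, srsubstate
    FROM pg_subscription_rel
    WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');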

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT or DATASYNC state from the publisher (using pg_get_sequence_data(); see the SQL sketch below).
    b) Reports an error for sequences whose parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of up to 100.
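
To make steps (a) and (c) concrete, here is a rough SQL equivalent of what
one batch boils down to (a sketch only: the worker sends the remote query
over the walreceiver connection, batches up to 100 sequences into a single
VALUES list as done in copy_sequences(), and applies the fetched value with
the internal SetSequence() call rather than setval(); "public.my_seq" is a
hypothetical sequence name):

    -- On the publisher: fetch the current state and parameters of the
    -- sequence.
    SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
           seq.seqstart, seq.seqincrement, seq.seqmin,
           seq.seqmax, seq.seqcycle
    FROM ( VALUES ('public', 'my_seq') ) AS s (schname, seqname)
    JOIN pg_namespace n ON n.nspname = s.schname
    JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
    JOIN pg_sequence seq ON seq.seqrelid = c.oid
    JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true;

    -- On the subscriber: if the parameters match, apply the fetched
    -- last_value and is_called (shown here with example values 42/true).
    SELECT setval('public.my_seq', 42, true);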

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, this
      does not add newly published sequences or remove sequences that are
      no longer published.
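
For context, the three cases above correspond to the following subscriber-side
commands (a usage sketch; connection string, publication and subscription
names are illustrative):

    -- 1) Initial sync: published sequences are added to pg_subscription_rel
    --    in INIT state and a sequencesync worker copies their values.
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub;

    -- 2) Newly published sequences are added in INIT state and synchronized;
    --    sequences no longer published are removed.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- 3) All sequences known to the subscription are reset to DATASYNC state
    --    and re-synchronized.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;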

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 767 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1413 insertions(+), 181 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 189b22a9f56..46da31731c6 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * The log_cnt argument is currently used only by the sequencesync worker
+ * to set log_cnt while synchronizing sequence values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 833b9700763..5c860139933 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index c900b6cf3b1..94b035978b9 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the apply worker's record of when a sequencesync worker was last
+ * started (last_seqsync_start_time) for this subscription.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..20cdefed41b
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,767 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction; the locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can read the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process has no direct database access and would need additional
+ *    infrastructure to connect to each database and retrieve the sequences
+ *    to synchronize.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 12
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It reports sequences for which the user lacks sufficient privileges, as
+ * well as sequences that exist on both sides but whose parameters differ.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy the existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber's
+ * sequence to the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy the existing data of the sequences to be synchronized from the
+ * publisher.  Each local sequence is opened and locked in copy_sequence().
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		if (batch_size == 0)
+		{
+			CommitTransactionCommand();
+			break;
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher and the result set can be smaller than the batch.
+		 * Processed entries have already been removed via HASH_REMOVE.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes with XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequence sync worker sequences",
+									256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key,
+								HASH_ENTER, &found);
+		Assert(seq_entry != NULL);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
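+		/*
+		 * Replace the key pointers that hash_search() copied into the entry
+		 * with copies allocated in CacheMemoryContext, so the entry stays
+		 * valid after the relation is closed and the transaction commits.
+		 */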
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
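+		/* Free the per-sequence name copies allocated in CacheMemoryContext. */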
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized,
+ * start new tablesync workers for newly added tables, and start a new
+ * sequencesync worker if there are newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
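+			/*
+			 * Sequences are not tracked in a list here; just remember that at
+			 * least one sequence is not yet in the READY state.
+			 */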
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
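+	/* Tell the caller, if interested, whether any sequences still need syncing. */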
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
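+
+				/* A new entry starts at zero so the first launch is not delayed. */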
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a85aca2dceb..826e021d3f3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5575,7 +5601,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5695,8 +5722,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5807,6 +5834,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5826,14 +5857,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5904,6 +5937,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
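+	/* Determine which error counter to report based on this worker's type. */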
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5916,9 +5953,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 6bc6be13d2a..f5a09c0f536 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f1930412b8b..eb4f508fe7a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5700,9 +5700,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f402b17295c..3a5fa8f8be1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -15,6 +15,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -107,6 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -413,6 +415,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -763,7 +766,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that the sequence sync no longer fails.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
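+# (With default sequence options, nextval() WAL-logs 32 values in advance, so
+# after exactly 100 fetches we expect last_value = 100 and log_cnt = 32.)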
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should re-synchronize
+# the sequences already known to the subscription, but it should not pick up
+# newly published sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should make sequence
+# synchronization fail when a sequence definition does not match between the
+# publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for the mismatched sequence definition is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message about the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8620169bdde..12f0bd04bcc 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250923-0007-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 44283ba4c758e0b987a33c0f494a68cfc889cd46 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250923 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 279 +++++++++++++++++++---
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 +++++
 8 files changed, 477 insertions(+), 83 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e9b420f3ddb..7138de1acb8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..9313cbfd1fd 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1049,24 +1053,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1495,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values by executing
+     <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any errors reported for sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any errors reported for sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any errors reported for sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#338shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#337)
Re: Logical Replication of sequences

On Tue, Sep 23, 2025 at 6:39 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 22 Sept 2025 at 12:03, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Sep 18, 2025 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, these are handled in the attached patch.

Please find a few comments:

patch005:
1)
GetSubscriptionRelations:
+ /* Skip sequences if they were not requested */
+ if (!get_sequences && (relkind == RELKIND_SEQUENCE))
+ continue;
+
+ /* Skip tables if they were not requested */
+ if (!get_tables && (relkind != RELKIND_SEQUENCE))
+ continue;

The use of negated conditions makes the logic harder to follow,
especially in the second if block.

Can we write it as:
bool is_sequence = (relkind == RELKIND_SEQUENCE);

/* Skip if the relation type is not requested */
if ((get_tables && is_sequence) ||
(get_sequences && !is_sequence))
continue;

Or at least:
/* Skip sequences if they were not requested */
if (get_tables && (relkind == RELKIND_SEQUENCE))
continue;

/* Skip tables if they were not requested */
if (get_sequences && (relkind != RELKIND_SEQUENCE))
continue;

I felt this would not work. Say we want both sequences and tables;
won't the sequences be skipped by:
if (get_tables && (relkind == RELKIND_SEQUENCE))
continue;

Okay, I see your point. In that case, could we reverse the conditions
instead? That seems like the more obvious choice in terms of
readability:
if ((relkind == RELKIND_SEQUENCE && !get_sequences))
continue;

if ((relkind != RELKIND_SEQUENCE && !get_tables))
continue;

For the second if-block, a more understandable condition would be:
(relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
&& !get_tables

But here we will have to check relkind twice, so I leave the decision to you.

~~

Few comments on latest patch:

LogicalRepSyncSequences():
1)
+ seq_entry = hash_search(sequences_to_copy, &key,
+ HASH_ENTER, &found);
+ Assert(seq_entry != NULL);

Since we are using HASH_ENTER, it will be good to add Assert(!found) as well.

2)
+ sequences_to_copy = hash_create("Logical replication sequence sync
worker sequences",
+ 256, &ctl, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+

The name of the hash-table looks odd. Shall it simply be 'Logical
replication sequences'? 'Logical Replication' is good enough to give
an indication that these are sequences being synchronized by the
seq-sync worker.

~~

copy_sequences():
3)
+ if (batch_size >= MAX_SEQUENCES_SYNC_PER_BATCH ||
+ (current_index + batch_size == total_seqs))
+ break;

In the first condition, I think a better comparison would be an equality
check (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH), since we are not
letting batch_size go beyond MAX_SEQUENCES_SYNC_PER_BATCH.

4)
+ if (batch_size == 0)
+ {
+ CommitTransactionCommand();
+ break;
+ }

I could not think of a scenario where this will be hit. The outer loop
condition is 'while (current_index < total_seqs)', so batch_size has
to be non-zero. Even if we skip some entries because
remote_seq_queried is already set to true, that should not make
batch_size 0, as the entries with remote_seq_queried=true are
already accounted for in current_index. Or am I missing something
here?

~~

sequencesync_list_invalidate_cb():
5)

+ /* invalidate all entries */
+ hash_seq_init(&status, sequences_to_copy);
+ while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+ entry->entry_valid = false;

Can you please elaborate when this case can be hit? I see such logic
in all such invalidation functions registered with
CacheRegisterRelcacheCallback(), but could not find any relevant
comment.

~~

Reviewing further.

thanks
Shveta

#339vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#338)
7 attachment(s)
Re: Logical Replication of sequences

On Thu, 25 Sept 2025 at 12:23, shveta malik <shveta.malik@gmail.com> wrote:

sequencesync_list_invalidate_cb():
5)

+ /* invalidate all entries */
+ hash_seq_init(&status, sequences_to_copy);
+ while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+ entry->entry_valid = false;

Can you please elaborate when this case can be hit? I see such logic
in all such invalidation functions registered with
CacheRegisterRelcacheCallback(), but could not find any relevant
comment.

I noticed this could happen in cases like:
create publication for all tables;
alter publication on many relations;

but there might be more cases apart from this.
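
Concretely, a minimal sketch of that first case (the publication name is
hypothetical, and whether this actually reaches the invalidate-all path
depends on how the relcache callback is triggered):

-- Hypothetical illustration: a FOR ALL TABLES publication followed by a
-- publication-wide property change touches many (potentially all)
-- relations at once, which can show up as a blanket relcache invalidation.
CREATE PUBLICATION pub_all FOR ALL TABLES;
ALTER PUBLICATION pub_all SET (publish = 'insert, update');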

Rest of the comments were addressed.
The attached patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20250926-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch; charset=UTF-8)
From c0c97b1c0a648eb6d0179d8d84e03f6e3d19d201 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250926 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence from the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN will
reflect the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect if the sequence has advanced and is now out-of-sync. This
comparison will help determine whether re-synchronization is needed for a
given sequence.
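
As a rough sketch of that comparison (function and column names are taken
from this thread's patches; treat it as an illustration, not the final
interface):

-- On the publisher: the sequence's current page LSN, as returned by the
-- enhanced pg_get_sequence_data() in this patch.
SELECT page_lsn FROM pg_get_sequence_data('s1');

-- On the subscriber: the LSN recorded for the sequence at its last
-- synchronization.
SELECT srsublsn
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relname = 's1' AND c.relkind = 'S';

-- If the publisher's page_lsn is newer than srsublsn, the sequence has
-- advanced since the last sync and can be re-synchronized with
-- ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES.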

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out | 15 ++++++++++----
 src/test/regress/sql/sequence.sql      |  5 ++++-
 5 files changed, 58 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 01eba3b5a19..f6c44b188fd 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..8eeb60a3378 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -266,6 +266,13 @@ SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new
           2 | t          | t
 (1 row)
 
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+ last_value | is_called | log_cnt_ok | lsn 
+------------+-----------+------------+-----
+          2 | t         | t          | t
+(1 row)
+
 DROP SEQUENCE foo_seq_new;
 -- renaming serial sequences
 ALTER TABLE serialtest1_f2_seq RENAME TO serialtest1_f2_foo;
@@ -840,10 +847,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt 
+------------+-----------+---------
+         10 | t         |      32
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..afc1f92407a 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -138,6 +138,9 @@ SELECT nextval('foo_seq_new');
 -- log_cnt can be higher if there is a checkpoint just at the right
 -- time, so just test for the expected range
 SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new;
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+
 DROP SEQUENCE foo_seq_new;
 
 -- renaming serial sequences
@@ -414,6 +417,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

v20250926-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patchtext/x-patch; charset=US-ASCII; name=v20250926-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patchDownload
From c7ff5a9dbb2c841bbc2e9682f7157b6183b353c9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250926 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as REFRESH operations is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 03c0913bf72..6a5b226c906 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10984,7 +10984,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index afa78cb4f5d..d4b6cc52319 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4359,7 +4359,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20250926-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 81b8635a839d2e6eca2a490b65f69353ed779bce Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250926 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql is enhanced: \d now shows which publications include
the specified sequence, and \dRp shows whether a publication includes
all sequences.
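
As a rough illustration (not part of the patch; object names here are
made up), the new displays can be exercised like this once the patch
is applied:

  CREATE SEQUENCE demo_seq;
  CREATE PUBLICATION demo_allseq FOR ALL SEQUENCES;
  -- \d demo_seq should list "demo_allseq" in a "Publications:" footer,
  -- and \dRp should show the new "All sequences" column:
  \d demo_seq
  \dRp demo_allseq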

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
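
A minimal SQL sketch of the accepted and rejected combinations,
assuming this patch is applied (publication names are illustrative):

  CREATE PUBLICATION pub_allseq FOR ALL SEQUENCES;
  CREATE PUBLICATION pub_all FOR ALL TABLES, ALL SEQUENCES;
  -- Rejected: mixing ALL SEQUENCES with TABLE or TABLES IN SCHEMA.
  -- CREATE PUBLICATION pub_bad FOR ALL SEQUENCES, TABLE some_table;
  -- WITH (publish = ...) on a sequences-only publication is an error;
  -- if ALL TABLES is also present it is accepted with a NOTICE that the
  -- parameters do not apply to sequence synchronization.
  CREATE PUBLICATION pub_mixed FOR ALL TABLES, ALL SEQUENCES
    WITH (publish = 'insert');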

Author: Vignesh C, Tomas Vondra
Reviewer: Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 114 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 577 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 785 insertions(+), 393 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 3de5687461c..a98acfd3b67 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable to a FOR ALL SEQUENCES
+		 * publication. If the publication includes tables as well, issue a
+		 * notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,16 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 }
 
 /*
@@ -2018,19 +2063,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9fd48acb1f8..03c0913bf72 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -201,6 +201,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -259,6 +263,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -446,7 +451,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -584,6 +589,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10703,7 +10709,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10723,13 +10734,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10841,6 +10853,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19616,6 +19650,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..0e442c28514 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,52 +4659,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..28794ef85da 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6176741d20b..64bfd309c9a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3585,11 +3585,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4ed14fc5b78..afa78cb4f5d 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4291,6 +4291,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4298,6 +4314,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..38766b5709e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -69,38 +69,32 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
 CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop schema from 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables DROP TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't set schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables SET TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_fortable FOR TABLE testpub_tbl1;
 RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +103,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +127,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +148,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +160,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +174,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +201,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +216,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+-- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describing the sequence lists both publications it belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+-- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should give a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with a WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  publication parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +330,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +348,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +380,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +396,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +415,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +426,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +462,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +475,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +593,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +888,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1081,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1292,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1335,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1409,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1418,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1431,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1460,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1486,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1557,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1568,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1589,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1601,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1613,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1624,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1635,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1646,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1677,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1689,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1771,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1792,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1926,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3c80d49b67e..84942daac84 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20250926-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 756d358b02b25ec4c05db205434b95b2c76beb1e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:08:03 +0530
Subject: [PATCH v20250926 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 196 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 242 insertions(+), 198 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 33b7ec7f029..d27f6274188 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -970,7 +970,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..2ba12517e93 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1789,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..a85aca2dceb 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5876,7 +5876,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0e442c28514..a1e47781dbe 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5312,7 +5312,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5370,7 +5370,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 84942daac84..e1b5fcca659 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2917,7 +2917,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20250926-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From 2c78a5c4fcdec31f60b53b832a46163a5aee74a0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 11:38:19 +0530
Subject: [PATCH v20250926 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table, marking them with the INIT state to
trigger resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
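
For illustration only (the subscription name sub1 is a stand-in, assumed to
be enabled and subscribed to a publication that includes sequences), the new
and extended forms would be run on the subscriber roughly like this:

-- Mark all sequences of the subscription for resynchronization
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- The existing refresh now also adds newly published sequences and
-- removes sequences that are no longer published
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;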
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 325 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 397 insertions(+), 116 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..b98d9ae78a6 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
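
As a rough sketch of querying this SRF directly (the relid output column
name is inferred from the pg_publication_sequences view defined later in
this patch, and pub_seq is a made-up FOR ALL SEQUENCES publication):

-- Sequences currently published by a FOR ALL SEQUENCES publication
SELECT relid::regclass AS seqname
FROM pg_get_publication_sequences('pub_seq');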
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..94156513ddf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
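
For example (pub_seq is a hypothetical publication name), the new view can
be queried much like pg_publication_tables:

-- Show which sequences each publication publishes
SELECT pubname, schemaname, sequencename
FROM pg_publication_sequences
WHERE pubname = 'pub_seq';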
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..833b9700763 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,7 +2739,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2636,7 +2757,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2644,11 +2765,20 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* Sequences can also be published from version 19 onwards, so fetch them. */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2791,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2807,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2700,7 +2839,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6a5b226c906..6a06044d5fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10989,6 +10989,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 64bfd309c9a..b93f470595c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2322,7 +2322,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f6c44b188fd..f1930412b8b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12294,6 +12294,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index d4b6cc52319..6e4d1d49b24 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,6 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e1b5fcca659..fe20d09764f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2900,6 +2900,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0
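
For quick reference, here is a minimal usage sketch of what the 0005 patch
above exposes. The publication and subscription names are made up, and it
assumes the earlier patches in this series provide a way to add sequences to
a publication:

    -- Publisher: list the sequences that belong to a publication, using
    -- the pg_publication_sequences view added above.
    SELECT pubname, schemaname, sequencename
      FROM pg_publication_sequences
     WHERE pubname = 'pub1';

    -- Subscriber: mark all sequences of the subscription for
    -- re-synchronization with the new command (the subscription must be
    -- enabled).
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;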

Attachment: v20250926-0006-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 6da983cbbb33fdf65e775e371e8a342cd54af5a5 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 23 Sep 2025 17:33:53 +0530
Subject: [PATCH v20250926 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of the sequences that need synchronizing (INIT or DATASYNC state) using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
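
To make the states above easier to follow, here is a small monitoring sketch
for the subscriber side. The subscription is hypothetical; 'i', 'd' and 'r'
are the existing SUBREL_STATE_INIT/DATASYNC/READY values in
pg_subscription_rel, and worker_type / sequence_sync_error_count come from
the views touched by this patch:

    -- Sequences still waiting for the sequencesync worker.
    SELECT srrelid::regclass AS sequence, srsubstate
      FROM pg_subscription_rel
     WHERE srsubstate IN ('i', 'd')
       AND srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');

    -- Is a sequencesync worker running, and has it hit errors?
    SELECT subname, worker_type FROM pg_stat_subscription;
    SELECT subname, sequence_sync_error_count FROM pg_stat_subscription_stats;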
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 760 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1406 insertions(+), 181 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 94156513ddf..0a9ab03ca87 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 833b9700763..5c860139933 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..99e6f566459 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time tracked for the sequencesync worker in
+ * the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..6ef3ab827f8
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,760 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC -> READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher process
+ *    operates without direct database access and so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 12
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by batch_size rather than by the number
+		 * of fetched rows, because some sequences may be missing and the
+		 * number of fetched rows may not match the batch size. The
+		 * hash_search() with HASH_REMOVE takes care of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	StartTransactionCommand();
+
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+
+		/* Clean up local catalog to prevent retry */
+		RemoveSubscriptionRel(MySubscription->oid, entry->localrelid);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+								entry->nspname, entry->seqname, MySubscription->name));
+	}
+
+	CommitTransactionCommand();
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes with XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
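
For readers skimming the new file, this is roughly the publisher-side
statement that copy_sequences() builds for one batch; the schema and sequence
names below are placeholders, and the real VALUES list holds up to
MAX_SEQUENCES_SYNC_PER_BATCH (100) entries:

    SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
           seq.seqstart, seq.seqincrement, seq.seqmin,
           seq.seqmax, seq.seqcycle
      FROM ( VALUES ('public', 'seq1'), ('public', 'seq2') ) AS s (schname, seqname)
      JOIN pg_namespace n ON n.nspname = s.schname
      JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
      JOIN pg_sequence seq ON seq.seqrelid = c.oid
      JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true
     ORDER BY s.schname, s.seqname;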
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a85aca2dceb..826e021d3f3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5575,7 +5601,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5695,8 +5722,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5807,6 +5834,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5826,14 +5857,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5904,6 +5937,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5916,9 +5953,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 6bc6be13d2a..f5a09c0f536 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
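
Since the parameter keeps its PGC_SIGHUP context, the limit, which now covers both table and sequence synchronization workers, can still be adjusted without a restart; for example (value chosen arbitrarily):

    ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
    SELECT pg_reload_conf();
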
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f1930412b8b..eb4f508fe7a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5700,9 +5700,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index e4a59a30b8c..8a2a2a16ceb 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -414,6 +416,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -764,7 +767,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
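
With the view definition updated as above, the new counter can be read next to the existing error counters; for example (subscription name is illustrative):

    SELECT subname, apply_error_count, sync_error_count,
           sequence_sync_error_count
    FROM pg_stat_subscription_stats
    WHERE subname = 'regress_seq_sub';
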
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The
+	# sequencesync worker will also fail because the sequence increment values
+	# differ between the publisher and the subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that the
+	# sequencesync worker doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should re-synchronize
+# the sequences already known to the subscription, but should not pick up
+# newly published sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when a
+# sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message about sequences missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
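
Condensing what the test above asserts, the refresh variants behave roughly as follows (subscription name as in the test):

    -- picks up newly published sequences and performs their initial copy
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;

    -- re-synchronizes the sequences already known to the subscription
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES;

    -- adds newly published sequences without copying their current values
    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
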
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fe20d09764f..8075121c446 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20250926-0007-Documentation-for-sequence-synchronization.patch (text/x-patch)
From 988793f611e1d29105fbdbcbc339a06fe86feb4b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250926 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 279 +++++++++++++++++++---
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 +++++
 8 files changed, 477 insertions(+), 83 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
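
For reference, the per-sequence state described above can be inspected with a query along these lines (not part of the patch):

    SELECT r.srsubid, r.srrelid::regclass AS seq, r.srsubstate
    FROM pg_subscription_rel r
    JOIN pg_class c ON c.oid = r.srrelid
    WHERE c.relkind = 'S';
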
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e9b420f3ddb..7138de1acb8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..9313cbfd1fd 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1049,24 +1053,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1495,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and
+          synchronize sequences in the publications that are being subscribed
+          to when the replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>, which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to the subscription, nor does it
+      remove sequences that are no longer published.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
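
For reference, a minimal sketch of how the objects documented above could be
inspected once the patch is applied (not part of the patch itself; 'pub1' is a
placeholder publication name):

-- publisher: list the sequences published by pub1
SELECT pubname, schemaname, sequencename
  FROM pg_publication_sequences
 WHERE pubname = 'pub1';

-- subscriber: show the synchronization state of each subscribed sequence
-- ('i' = initialize, 'd' = re-synchronize, 'r' = ready)
SELECT srrelid::regclass AS seqname, srsubstate
  FROM pg_subscription_rel
 WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');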

#340shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#339)
Re: Logical Replication of sequences

On Fri, Sep 26, 2025 at 12:55 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 25 Sept 2025 at 12:23, shveta malik <shveta.malik@gmail.com> wrote:

sequencesync_list_invalidate_cb():
5)

+ /* invalidate all entries */
+ hash_seq_init(&status, sequences_to_copy);
+ while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+ entry->entry_valid = false;

Can you please elaborate when this case can be hit? I see such logic
in all such invalidation functions registered with
CacheRegisterRelcacheCallback(), but could not find any relevant
comment.

I noticed this could happen in cases like:
create publication for all tables;
alter publication on many relations;

but there might be more apart from this

Okay. I will review more here.

Rest of the comments were addressed.
The attached patch has the changes for the same.

Thanks.

I found a race condition between the apply worker and the sequence
sync worker, where a sequence might be deleted from
pg_subscription_rel and fail to be re-added when it should be.

Steps:

1)
The publisher and subscriber both have two sequences: seq1 and seq2.

2)
A REFRESH PUBLICATION SEQUENCES command is executed on the subscriber.
Before the sequencesync worker on the subscriber can locate the
corresponding sequence on the publisher, the sequence gets dropped on
the publisher. In other words, the sequence is removed from the
publisher before walrcv_exec() is called in copy_sequences().

3)
Before the sequencesync worker on the subscriber can drop the sequence
locally, it is recreated on the publisher. Then, a second REFRESH
PUBLICATION SEQUENCES command is executed on the subscriber. (i.e.,
before RemoveSubscriptionRel() is reached in copy_sequences(), the
sequence is already recreated on the publisher and a new refresh
command is issued on the subscriber.)

4)
During this second REFRESH PUBLICATION SEQUENCES, the sequence is
found to already exist in pg_subscription_rel, so it is not re-added.
However, concurrently, the sequencesync worker from the first refresh
proceeds and drops the sequence from the subscriber.

As a result, the sequence ends up being removed from
pg_subscription_rel, even though it should have remained after both
REFRESH PUBLICATION SEQUENCES commands.
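
A condensed SQL sketch of the steps above (names like sub1, seq1 are
placeholders, and the timing can only be reproduced by pausing the
sequencesync worker, e.g. under a debugger, at the points noted):

-- publisher and subscriber both start with the same sequences
CREATE SEQUENCE seq1;
CREATE SEQUENCE seq2;

-- subscriber: the first refresh launches a sequencesync worker
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- publisher: before that worker reaches walrcv_exec() in copy_sequences(),
-- drop seq1, then recreate it before the worker reaches
-- RemoveSubscriptionRel()
DROP SEQUENCE seq1;
CREATE SEQUENCE seq1;

-- subscriber: the second refresh sees seq1 already in pg_subscription_rel
-- and does not re-add it, after which the first worker drops the entry
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;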

thanks
Shveta

#341vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#340)
7 attachment(s)
Re: Logical Replication of sequences

On Mon, 29 Sept 2025 at 09:59, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 26, 2025 at 12:55 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 25 Sept 2025 at 12:23, shveta malik <shveta.malik@gmail.com> wrote:

sequencesync_list_invalidate_cb():
5)

+ /* invalidate all entries */
+ hash_seq_init(&status, sequences_to_copy);
+ while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+ entry->entry_valid = false;

Can you please elaborate when this case can be hit? I see such logic
in all such invalidation functions registered with
CacheRegisterRelcacheCallback(), but could not find any relevant
comment.

I noticed this could happen in cases like:
create publication for all tables;
alter publication on many relations;

but there might be more apart from this

Okay. I will review more here.

Rest of the comments were addressed.
The attached patch has the changes for the same.

Thanks.

I found a race condition between the apply worker and the sequence
sync worker, where a sequence might be deleted from
pg_subscription_rel and fail to be re-added when it should be.

Steps:

1)
The publisher and subscriber both have two sequences: seq1 and seq2.

2)
A REFRESH PUBLICATION SEQUENCES command is executed on the subscriber.
Before the sequencesync worker on the subscriber can locate the
corresponding sequence on the publisher, the sequence gets dropped on
the publisher. In other words, the sequence is removed from the
publisher before walrcv_exec() is called in copy_sequences().

3)
Before the sequencesync worker on the subscriber can drop the sequence
locally, it is recreated on the publisher. Then, a second REFRESH
PUBLICATION SEQUENCES command is executed on the subscriber. (i.e.,
before RemoveSubscriptionRel() is reached in copy_sequences(), the
sequence is already recreated on the publisher and a new refresh
command is issued on the subscriber.)

4)
During this second REFRESH PUBLICATION SEQUENCES, the sequence is
found to already exist in pg_subscription_rel, so it is not re-added.
However, concurrently, the sequencesync worker from the first refresh
proceeds and drops the sequence from the subscriber.

As a result, the sequence ends up being removed from
pg_subscription_rel, even though it should have remained after both
REFRESH PUBLICATION SEQUENCES commands.

I've resolved it by modifying the sequence sync worker to no longer
remove sequences from pg_subscription_rel, aligning its behavior with
that of the tablesync worker. This change ensures consistency and also
addresses the reported problem. The attached patch includes the
necessary modifications.
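
For completeness, a quick check on the subscriber (reusing the placeholder
names from the reproduction above) that the catalog entry now survives both
refreshes might look like:

SELECT srrelid::regclass AS seqname, srsubstate
  FROM pg_subscription_rel
 WHERE srrelid = 'seq1'::regclass;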

Regards,
Vignesh

Attachments:

v20250930-0006-New-worker-for-sequence-synchronization-du.patchtext/x-patch; charset=UTF-8; name=v20250930-0006-New-worker-for-sequence-synchronization-du.patchDownload
From 286019b9e657e2fb524240d49ca3ad70d8b9ed10 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 23 Sep 2025 17:33:53 +0530
Subject: [PATCH v20250930 6/7] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences in INIT state with pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  29 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 750 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   6 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1396 insertions(+), 181 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 94156513ddf..0a9ab03ca87 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 01d300d3cf4..9b6e3647cc4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1408,6 +1408,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 879c62bdccc..265ce487c27 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,9 +953,12 @@ lastval(PG_FUNCTION_ARGS)
  * restore the state of a sequence exactly during data-only restores -
  * it is the only way to clear the is_called flag in an existing
  * sequence.
+ *
+ * log_cnt is currently used only by the sequencesync worker to set the
+ * log_cnt for sequences while synchronizing values from the publisher.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1007,7 +1009,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 						minv, maxv)));
 
 	/* Set the currval() state only if iscalled = true */
-	if (iscalled)
+	if (is_called)
 	{
 		elm->last = next;		/* last returned number */
 		elm->last_valid = true;
@@ -1024,8 +1026,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
 	START_CRIT_SECTION();
 
 	seq->last_value = next;		/* last fetched number */
-	seq->is_called = iscalled;
-	seq->log_cnt = 0;
+	seq->is_called = is_called;
+	seq->log_cnt = log_cnt;
 
 	MarkBufferDirty(buf);
 
@@ -1057,7 +1059,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1067,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1083,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, SEQ_LOG_CNT_INVALID, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1847,6 +1849,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
 		values[2] = Int64GetDatum(seq->log_cnt);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 833b9700763..5c860139933 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..99e6f566459 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..de663efa980
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,750 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. Locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can read the sequences that need to be synchronized
+ *    directly from the pg_subscription_rel system catalog, whereas the
+ *    launcher process has no database connection and would need a new
+ *    framework to connect to each database to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 12
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "Alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	int64		log_cnt;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	log_cnt = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		/* Release whichever of the two we managed to acquire */
+		if (HeapTupleIsValid(tup))
+			ReleaseSysCache(tup);
+		if (sequence_rel)
+			table_close(sequence_rel, RowExclusiveLock);
+
+		(*skipped_count)++;
+		elog(LOG, "skipping synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, log_cnt, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
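+	/* Close the sequence but keep the lock until the transaction commits */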
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
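+		/* Expected data types of the columns in the remote query's result */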
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, INT8OID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d: %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Advance current_index by the number of sequences requested in this
+		 * batch rather than by the number of rows fetched, since sequences
+		 * missing on the publisher return no row. Successfully processed
+		 * entries have already been removed from the hash table above.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on the publisher were skipped during synchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback.
+ *
+ * Invalidate the cached entry for the given relation (or all entries) so
+ * that concurrently altered sequences are skipped by copy_sequence().
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
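+/*
+ * Hash function for the sequences_to_copy hash table; combines the hashes of
+ * the namespace name and the sequence name.
+ */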
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes */
+	return h1 ^ h2;
+}
+
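+/*
+ * Match function for the sequences_to_copy hash table.
+ */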
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
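+	/*
+	 * Set up the hash table of sequences to be synchronized, keyed by
+	 * (nspname, seqname) and allocated in CacheMemoryContext.
+	 */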
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
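+	/*
+	 * Scan pg_subscription_rel for all non-READY relations of this
+	 * subscription; non-sequence entries are skipped below.
+	 */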
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * If this is a clean exit of the sequencesync worker, reset the
+	 * last_seqsync_start_time so that the next sequencesync worker can be
+	 * started without waiting for the retry interval.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
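+ *
+ * If has_pending_sequences is not NULL, it is set to true when the
+ * subscription has sequences that are not yet in READY state.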
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static
+	 * because their values remain valid until the system catalog is
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a85aca2dceb..826e021d3f3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5575,7 +5601,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5695,8 +5722,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5807,6 +5834,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5826,14 +5857,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5904,6 +5937,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5916,9 +5953,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index c756c2bebaa..b6f375cf855 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2171,7 +2171,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2189,25 +2189,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2224,6 +2226,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 6bc6be13d2a..f5a09c0f536 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f1930412b8b..eb4f508fe7a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5700,9 +5700,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..3aec610028f 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, int64 log_cnt, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index e4a59a30b8c..8a2a2a16ceb 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -414,6 +416,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -764,7 +767,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
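+	/*
+	 * Time at which the last sequencesync worker was started; used to avoid
+	 * restarting it before wal_retrieve_retry_interval has elapsed.
+	 */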
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 4e2d6b693c6..3a73413738e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2188,6 +2188,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2199,7 +2200,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..de47f39fdbb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched for
+# the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|32|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# new sequences of the publisher, and changes to existing sequences should
+# also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|31|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same on the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the removal of the missing sequence from synchronization is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 7ce4bb40c05..4dc796d7409 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0
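
For reviewers exercising the TAP changes above by hand, a couple of ad-hoc queries on the subscriber show the same state the tests poll for. This is only a sketch; it assumes the subscription name used in 036_sequences.pl (regress_seq_sub) and the sequence_sync_error_count column added by this series:

-- Has the sequence synchronization worker reported any errors?
SELECT subname, sequence_sync_error_count, sync_error_count
  FROM pg_stat_subscription_stats
 WHERE subname = 'regress_seq_sub';

-- Which sequences tracked by the subscription are not yet in the ready ('r') state?
SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
  FROM pg_subscription_rel sr
       JOIN pg_class c ON c.oid = sr.srrelid
 WHERE c.relkind = 'S' AND sr.srsubstate <> 'r';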

Attachment: v20250930-0007-Documentation-for-sequence-synchronization.patch (text/x-patch)
From b45a9b4d788f82b542ecdb32557ae03303f33ffe Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20250930 7/7] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/logical-replication.sgml     | 279 +++++++++++++++++++---
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 +++++
 8 files changed, 477 insertions(+), 83 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e9b420f3ddb..7138de1acb8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..9313cbfd1fd 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables, or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1049,24 +1053,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1495,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 3f4a27a736e..fd4bb09f896 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing table data and synchronize
+          sequences in the publications that are being subscribed to when the
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to the subscription or remove
+      sequences that are no longer published.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle errors caused by sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication; temporary
+      and unlogged sequences are excluded.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. It is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal>, or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal>, and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle errors caused by sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0
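
A small aside on the documentation above: the "Sequence Definition Mismatches" subsection describes the recovery path only in prose. A minimal sketch of that path on the subscriber, using the hypothetical names s5 and sub1 and assuming the mismatch is an INCREMENT difference, would be:

-- The sequencesync worker keeps erroring out because the definitions differ;
-- align the subscriber's definition with the publisher's first ...
ALTER SEQUENCE s5 INCREMENT BY 2;
-- ... and then re-run synchronization for all subscribed sequences.
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;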

Attachment: v20250930-0001-Enhance-pg_get_sequence_data-function.patch (text/x-patch)
From 5c4d2a20192cf26ceec975d448b5c5a94a7a3366 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 10:23:31 +0530
Subject: [PATCH v20250930 1/7] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
log_cnt and associated page LSN.

In subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence on the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN
reflects the state of the sequence at the time of synchronization.
By comparing the current LSN of the sequence on the publisher
(via pg_sequence_state()) with the stored LSN on the subscriber, users
can detect whether the sequence has advanced and is now out-of-sync.
This comparison helps determine whether re-synchronization is needed
for a given sequence.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/func/func-sequence.sgml   | 27 ++++++++++++++++++++++++++
 src/backend/commands/sequence.c        | 16 ++++++++++++---
 src/include/catalog/pg_proc.dat        |  6 +++---
 src/test/regress/expected/sequence.out | 15 ++++++++++----
 src/test/regress/sql/sequence.sql      |  5 ++++-
 5 files changed, 58 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..649f1522bb2 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>log_cnt</parameter> <type>bigint</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, <literal>log_cnt</literal> shows how many fetches remain before a
+        new WAL record must be written, and <literal>page_lsn</literal> is the
+        LSN corresponding to the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..879c62bdccc 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,15 +1796,16 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will
+ * also be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	4
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1820,10 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "log_cnt",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 4, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1839,15 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = Int64GetDatum(seq->log_cnt);
+		values[3] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 01eba3b5a19..f6c44b188fd 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,11 +3433,11 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,int8,pg_lsn}', proargmodes => '{i,o,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,log_cnt,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..8eeb60a3378 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -266,6 +266,13 @@ SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new
           2 | t          | t
 (1 row)
 
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+ last_value | is_called | log_cnt_ok | lsn 
+------------+-----------+------------+-----
+          2 | t         | t          | t
+(1 row)
+
 DROP SEQUENCE foo_seq_new;
 -- renaming serial sequences
 ALTER TABLE serialtest1_f2_seq RENAME TO serialtest1_f2_foo;
@@ -840,10 +847,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | log_cnt 
+------------+-----------+---------
+         10 | t         |      32
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..afc1f92407a 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -138,6 +138,9 @@ SELECT nextval('foo_seq_new');
 -- log_cnt can be higher if there is a checkpoint just at the right
 -- time, so just test for the expected range
 SELECT last_value, log_cnt IN (31, 32) AS log_cnt_ok, is_called FROM foo_seq_new;
+-- pg_get_sequence_data
+SELECT last_value, is_called, log_cnt IN (31, 32) AS log_cnt_ok, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('foo_seq_new');
+
 DROP SEQUENCE foo_seq_new;
 
 -- renaming serial sequences
@@ -414,6 +417,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, log_cnt FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

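To illustrate the enhanced function and the intended LSN comparison (a sketch
only; 's1' is an illustrative sequence name, and the srsublsn usage assumes
the sequence-synchronization patches later in this series):

-- Publisher: the two new output columns.
SELECT last_value, is_called, log_cnt, page_lsn
FROM pg_get_sequence_data('s1');

-- Subscriber: the LSN recorded at the last synchronization. If the
-- publisher's current LSN for the sequence is newer than this value, the
-- sequence has advanced and may need to be re-synchronized.
SELECT srrelid::regclass AS seqname, srsublsn
FROM pg_subscription_rel
WHERE srrelid = 's1'::regclass;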
Attachment: v20250930-0005-Introduce-REFRESH-PUBLICATION-SEQUENCES-fo.patch (text/x-patch)
From f3fa957b86fe8970689dd75c3aae46a2126f20f9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 11:38:19 +0530
Subject: [PATCH v20250930 5/7] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- add newly published sequences that are not yet part of the subscription
- remove sequences that are no longer included in the publication

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 325 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 397 insertions(+), 116 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..b98d9ae78a6 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE publications; FOR ALL TABLES/SEQUENCES
+ * publications should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
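A rough end-to-end sketch of the workflow this patch enables (the connection
string, publication and subscription names are illustrative; the FOR ALL
SEQUENCES syntax comes from the 0002 patch in this series):

-- Publisher:
CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

-- Subscriber: sequences are picked up at CREATE SUBSCRIPTION time and by
-- the existing REFRESH/ADD/DROP/SET PUBLICATION commands.
CREATE SUBSCRIPTION sub_seq
    CONNECTION 'dbname=postgres host=publisher'
    PUBLICATION pub_all_seq;

-- Later, once sequences have advanced on the publisher, mark all of the
-- subscription's sequences for re-synchronization:
ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION SEQUENCES;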
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..94156513ddf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c77fa0234bb..01d300d3cf4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..833b9700763 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relation in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,7 +2739,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2636,7 +2757,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2644,11 +2765,20 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, inclusion of sequences in the target is supported */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2791,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2807,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2700,7 +2839,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index d0d72960208..7fdcd01429a 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10990,6 +10990,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 80540c017bd..d708f3b0266 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 64bfd309c9a..b93f470595c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2322,7 +2322,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f6c44b188fd..f1930412b8b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12294,6 +12294,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index f921cb78d06..b6cd827b70d 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4361,6 +4361,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 35e8aad7701..4e2d6b693c6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index dc99941209c..7ce4bb40c05 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2902,6 +2902,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0

Attachment: v20250930-0004-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From 229608cc198af4ace8a84ab833aa25550b9e1fef Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20250930 4/7] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ffa2f8047b3..d0d72960208 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10985,7 +10985,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index f29a2b404e0..f921cb78d06 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4360,7 +4360,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

v20250930-0002-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (text/x-patch)
From 9943987754788d3f705826c62d04f9f51f3b4659 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20250930 2/7] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, the psql meta-commands have been enhanced: \d now displays
which publications include the specified sequence, and \dRp shows
whether a specified publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.
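
For example (illustrative statements only, not part of the patch; the
object names are made up):

  CREATE PUBLICATION pub_sequences FOR ALL SEQUENCES;
  CREATE PUBLICATION pub_everything FOR ALL TABLES, ALL SEQUENCES;
  -- not allowed: ALL SEQUENCES cannot be combined with TABLE or TABLES IN SCHEMA
  CREATE PUBLICATION pub_mixed FOR ALL SEQUENCES, TABLE some_table;
  -- not allowed: publication parameters are rejected for sequence-only publications
  CREATE PUBLICATION pub_params FOR ALL SEQUENCES WITH (publish = 'insert');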

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 114 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 577 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 785 insertions(+), 393 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index f4fc17acbe1..57150fed6c6 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,16 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 }
 
 /*
@@ -2014,19 +2059,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index f1def67ac7c..ffa2f8047b3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -447,7 +452,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10704,7 +10710,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10724,13 +10735,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10842,6 +10854,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19630,6 +19664,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..0e442c28514 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,52 +4659,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..28794ef85da 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6176741d20b..64bfd309c9a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3585,11 +3585,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ac0e02a1db7..f29a2b404e0 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4293,6 +4293,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4300,6 +4316,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..38766b5709e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -69,38 +69,32 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
 CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop schema from 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables DROP TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't set schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables SET TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_fortable FOR TABLE testpub_tbl1;
 RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +103,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +127,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +148,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +160,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +174,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +201,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +216,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with a WITH clause should raise a notice
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  publication parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +330,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +348,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +380,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +396,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +415,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +426,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +462,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +475,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +593,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +888,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1081,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1292,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1335,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1409,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1418,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1431,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1460,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1486,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1557,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1568,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1589,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1601,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1613,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1624,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1635,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1646,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1677,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1689,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1771,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1792,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1926,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37f26f6c6b7..1c613f0ef2b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

v20250930-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From 89da6a4edfe6a5a7dd3516f5dcad1c708b46f7de Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:08:03 +0530
Subject: [PATCH v20250930 3/7] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 196 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 242 insertions(+), 198 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 33b7ec7f029..d27f6274188 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -970,7 +970,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..2ba12517e93 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1789,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..a85aca2dceb 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5876,7 +5876,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0e442c28514..a1e47781dbe 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5312,7 +5312,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5370,7 +5370,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 1c613f0ef2b..dc99941209c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2919,7 +2919,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

#342Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#341)
Re: Logical Replication of sequences

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

--
With Regards,
Amit Kapila.

#343Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#342)
Re: Logical Replication of sequences

On Sat, Oct 04, 2025 at 09:24:32PM +0530, Amit Kapila wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

FWIW, I have argued two times at least that it should never be
necessary to expose log_cnt in the sequence meta-data: this is just a
counter to decide when a WAL record of a sequence should be generated.

If you are copying sequence data over the wire to a new node in
logical form, where the WAL is independent, this counter is
irrelevant: you can just reset it. Please see also a83a944e9fdd.
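
Roughly, with the default of 32 values being pre-logged per WAL
record, the counter just counts down between records (a minimal
sketch):

create sequence s;
select nextval('s');                -- WAL-logs 32 values ahead
select last_value, log_cnt from s;  -- 1 | 32
select nextval('s');                -- no new WAL record needed yet
select last_value, log_cnt from s;  -- 2 | 31
-- a fresh WAL record is written only once log_cnt reaches 0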
--
Michael

#344Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#343)
Re: Logical Replication of sequences

On Sun, Oct 5, 2025 at 7:54 AM Michael Paquier <michael@paquier.xyz> wrote:

On Sat, Oct 04, 2025 at 09:24:32PM +0530, Amit Kapila wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

FWIW, I have argued two times at least that it should never be
necessary to expose log_cnt in the sequence meta-data: this is just a
counter to decide when a WAL record of a sequence should be generated.

If you are copying sequence data over the wire to a new node in
logical form, where the WAL is independent, this counter is
irrelevant: you can just reset it. Please see also a83a944e9fdd.

Agreed, and I think we have the same behaviour after an upgrade.

--
With Regards,
Amit Kapila.

#345vignesh C
vignesh21@gmail.com
In reply to: Michael Paquier (#343)
Re: Logical Replication of sequences

On Sun, 5 Oct 2025 at 07:54, Michael Paquier <michael@paquier.xyz> wrote:

On Sat, Oct 04, 2025 at 09:24:32PM +0530, Amit Kapila wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

FWIW, I have argued two times at least that it should never be
necessary to expose log_cnt in the sequence meta-data: this is just a
counter to decide when a WAL record of a sequence should be generated.

Thanks, I have verified that the log_cnt value is not retained after an upgrade:
create sequence s1;
select nextval('s1');
select nextval('s1');

postgres=# select * from s1;
 last_value | log_cnt | is_called
------------+---------+-----------
          2 |      31 | t
(1 row)

After upgrade:
postgres=# select * from s1;
 last_value | log_cnt | is_called
------------+---------+-----------
          2 |       0 | t
(1 row)

Since the log_cnt value is not preserved across upgrades, copying it
would have no effect. I’ll remove log_cnt from pg_get_sequence_data
and post an updated version of the patch.

Regards,
Vignesh

#346vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#342)
1 attachment(s)
Re: Logical Replication of sequences

On Sat, 4 Oct 2025 at 21:24, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

I had intended to keep the log_cnt value the same as on the publisher.
However, I have verified that log_cnt is not retained across an
upgrade, so even if we copied it, the value would not be preserved.
The attached v20251006-0001-Enhance-pg_get_sequence_data-function.patch
removes log_cnt.

Regards,
Vignesh

Attachments:

v20251006-0001-Enhance-pg_get_sequence_data-function.patch (application/octet-stream)
From cadc820122cf4b62d1eee915b3e59b3c2a8847a6 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 6 Oct 2025 10:14:56 +0530
Subject: [PATCH v20251006] Enhance pg_get_sequence_data function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch enhances 'pg_get_sequence_data' to return the sequence’s
associated page LSN.

In the subsequent patches, when a sequence is synchronized to the
subscriber, the page LSN of the sequence on the publisher is also
captured and stored in pg_subscription_rel.srsublsn. This LSN reflects
the state of the sequence at the time of synchronization. By comparing
the current LSN of the sequence on the publisher (via
pg_sequence_state()) with the stored LSN on the subscriber, users can
detect whether the sequence has advanced and is now out of sync. This
comparison helps determine whether re-synchronization is needed for a
given sequence.
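
As a rough sketch of the intended check, once the subscriber-side
patches store the synchronization LSN in srsublsn (s1 here is an
arbitrary sequence name):

-- on the publisher: current page LSN of the sequence (added by this patch)
SELECT page_lsn FROM pg_get_sequence_data('s1');

-- on the subscriber: LSN recorded when the sequence was last synchronized
SELECT srsublsn FROM pg_subscription_rel WHERE srrelid = 's1'::regclass;

-- if the publisher's page_lsn has advanced past the stored srsublsn, the
-- sequence has changed since the last synchronization and is out of sync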

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/commands/sequence.c        | 10 ++++++++--
 src/include/catalog/pg_proc.dat        |  4 ++--
 src/test/regress/expected/sequence.out |  8 ++++----
 src/test/regress/sql/sequence.sql      |  2 +-
 4 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 636d3c3ec73..cf46a543364 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -45,6 +45,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
 #include "utils/resowner.h"
 #include "utils/syscache.h"
 #include "utils/varlena.h"
@@ -1795,7 +1796,7 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 
 
 /*
- * Return the sequence tuple.
+ * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
  * without needing to individually query each sequence relation.
@@ -1803,7 +1804,7 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
 {
-#define PG_GET_SEQUENCE_DATA_COLS	2
+#define PG_GET_SEQUENCE_DATA_COLS	3
 	Oid			relid = PG_GETARG_OID(0);
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1818,6 +1819,8 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, "is_called",
 					   BOOLOID, -1, 0);
+	TupleDescInitEntry(resultTupleDesc, (AttrNumber) 3, "page_lsn",
+					   LSNOID, -1, 0);
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
 	init_sequence(relid, &elm, &seqrel);
@@ -1833,11 +1836,14 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 		Buffer		buf;
 		HeapTupleData seqtuple;
 		Form_pg_sequence_data seq;
+		Page		page;
 
 		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+		page = BufferGetPage(buf);
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
 	}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 01eba3b5a19..fa8686ae00b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3436,8 +3436,8 @@
 { oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
-  proallargtypes => '{regclass,int8,bool}', proargmodes => '{i,o,o}',
-  proargnames => '{sequence_oid,last_value,is_called}',
+  proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
+  proargnames => '{sequence_oid,last_value,is_called,page_lsn}',
   prosrc => 'pg_get_sequence_data' },
 
 { oid => '275', descr => 'return the next oid for a system table',
diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out
index 15925d99c8a..c4454e5b435 100644
--- a/src/test/regress/expected/sequence.out
+++ b/src/test/regress/expected/sequence.out
@@ -840,10 +840,10 @@ SELECT nextval('test_seq1');
 (1 row)
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
- last_value | is_called 
-------------+-----------
-         10 | t
+SELECT last_value, is_called, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
+ last_value | is_called | lsn 
+------------+-----------+-----
+         10 | t         | t
 (1 row)
 
 DROP SEQUENCE test_seq1;
diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql
index 2c220b60749..b3344537929 100644
--- a/src/test/regress/sql/sequence.sql
+++ b/src/test/regress/sql/sequence.sql
@@ -414,6 +414,6 @@ SELECT nextval('test_seq1');
 SELECT nextval('test_seq1');
 
 -- pg_get_sequence_data
-SELECT * FROM pg_get_sequence_data('test_seq1');
+SELECT last_value, is_called, page_lsn <= pg_current_wal_lsn() as lsn FROM pg_get_sequence_data('test_seq1');
 
 DROP SEQUENCE test_seq1;
-- 
2.43.0

#347vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#346)
6 attachment(s)
Re: Logical Replication of sequences

On Mon, 6 Oct 2025 at 12:07, vignesh C <vignesh21@gmail.com> wrote:

On Sat, 4 Oct 2025 at 21:24, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

I thought to keep the log_cnt value the same value as the publisher.
I have verified from the upgrade that we don't retain the log_cnt
value after upgrade, even if we copy log_cnt, the value will not be
retained. The attached
v20251006-0001-Enhance-pg_get_sequence_data-function.patch has the
changes to remove log_cnt.

Here are the remaining patches, rebased.

Regards,
Vignesh

Attachments:

v20251006_2-0002-Reorganize-tablesync-Code-and-Introduce-.patch (application/octet-stream)
From 1fda46b1f2d99981ae1ffb3f659c97b47866c926 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 15 Sep 2025 12:08:03 +0530
Subject: [PATCH v20251006_2 2/6] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 190 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 196 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 242 insertions(+), 198 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 33b7ec7f029..d27f6274188 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -970,7 +970,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..5109b197805
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..2ba12517e93 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1789,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..a85aca2dceb 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5876,7 +5876,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0e442c28514..a1e47781dbe 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5312,7 +5312,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5370,7 +5370,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 1c613f0ef2b..dc99941209c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2919,7 +2919,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

Attachment: v20251006_2-0001-Introduce-ALL-SEQUENCES-support-for-Post.patch
From 39c7e18b5c891e6776fd3b44ae68db28aac169e8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 5 Aug 2025 19:39:32 +0530
Subject: [PATCH v20251006_2 1/6] Introduce "ALL SEQUENCES" support for
 PostgreSQL logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql's \d command now displays which publications include
the specified sequence, and \dRp shows whether a given publication
includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c      |   7 +-
 src/backend/commands/publicationcmds.c    | 114 +++--
 src/backend/parser/gram.y                 |  84 +++-
 src/bin/pg_dump/pg_dump.c                 |  89 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   | 202 +++++---
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_publication.h      |   7 +
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 577 ++++++++++++----------
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 14 files changed, 785 insertions(+), 393 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..b306455aaad 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -129,12 +129,16 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
+ *
+ * XXX  This also allows sequences to be included, which is necessary
+ * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -1083,6 +1087,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index f4fc17acbe1..57150fed6c6 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -81,7 +81,8 @@ parse_publication_options(ParseState *pstate,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +91,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -841,17 +842,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	List	   *relations = NIL;
 	List	   *schemaidlist = NIL;
 
+	/* Publication actions are not applicable for sequence-only publications */
+	bool		def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	/* must have CREATE privilege on database */
 	aclresult = object_aclcheck(DatabaseRelationId, MyDatabaseId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
+
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!stmt->for_all_tables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -995,10 +1024,30 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  true);
 
 	pubform = (Form_pg_publication) GETSTRUCT(tup);
 
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+	{
+		/*
+		 * WITH clause parameters are not applicable when creating a FOR ALL
+		 * SEQUENCES publication. If the publication includes tables as well,
+		 * issue a notice.
+		 */
+		if (!pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
+
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
 	 * table, the partition's row filter and column list will be used. So
@@ -1451,20 +1500,16 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
+	if (tables && (pubform->puballtables || pubform->puballsequences))
 		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
 }
 
 /*
@@ -2014,19 +2059,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 57bf7a7c7f2..9a0395e29b0 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -447,7 +452,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10704,7 +10710,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10724,13 +10735,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10842,6 +10854,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19639,6 +19673,46 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables/all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+					errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..0e442c28514 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,52 +4659,62 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not applicable to FOR ALL SEQUENCES publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query, ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..28794ef85da 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36b5b2457f9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1758,28 +1758,19 @@ describeOneTableDetails(const char *schemaname,
 	if (tableinfo.relkind == RELKIND_SEQUENCE)
 	{
 		PGresult   *result = NULL;
-		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
-							  "       seqstart AS \"%s\",\n"
-							  "       seqmin AS \"%s\",\n"
-							  "       seqmax AS \"%s\",\n"
-							  "       seqincrement AS \"%s\",\n"
-							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       seqcache AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+							  "       seqstart,\n"
+							  "       seqmin,\n"
+							  "       seqmax,\n"
+							  "       seqincrement,\n"
+							  "       CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+							  "       seqcache\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf,
 							  "FROM pg_catalog.pg_sequence\n"
 							  "WHERE seqrelid = '%s';",
@@ -1788,22 +1779,15 @@ describeOneTableDetails(const char *schemaname,
 		else
 		{
 			printfPQExpBuffer(&buf,
-							  "SELECT 'bigint' AS \"%s\",\n"
-							  "       start_value AS \"%s\",\n"
-							  "       min_value AS \"%s\",\n"
-							  "       max_value AS \"%s\",\n"
-							  "       increment_by AS \"%s\",\n"
-							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
-							  "       cache_value AS \"%s\"\n",
-							  gettext_noop("Type"),
-							  gettext_noop("Start"),
-							  gettext_noop("Minimum"),
-							  gettext_noop("Maximum"),
-							  gettext_noop("Increment"),
+							  "SELECT 'bigint',\n"
+							  "       start_value,\n"
+							  "       min_value,\n"
+							  "       max_value,\n"
+							  "       increment_by,\n"
+							  "       CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+							  "       cache_value\n",
 							  gettext_noop("yes"),
-							  gettext_noop("no"),
-							  gettext_noop("Cycles?"),
-							  gettext_noop("Cache"));
+							  gettext_noop("no"));
 			appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
 			/* must be separate because fmtId isn't reentrant */
 			appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1813,6 +1797,59 @@ describeOneTableDetails(const char *schemaname,
 		if (!res)
 			goto error_return;
 
+		numrows = PQntuples(res);
+
+		/*
+		 * XXX reset to use expanded output for sequences (maybe we should
+		 * keep this disabled, just like for tables?)
+		 */
+		myopt.expanded = pset.popt.topt.expanded;
+
+		printTableInit(&cont, &myopt, title.data, 7, numrows);
+		printTableInitialized = true;
+
+		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
+			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+							  schemaname, relationname);
+		else
+			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+							  schemaname, relationname);
+
+		printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+		printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+		printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+		/* Generate table cells to be printed */
+		for (i = 0; i < numrows; i++)
+		{
+			/* Type */
+			printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+			/* Start */
+			printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+			/* Minimum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+			/* Maximum */
+			printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+			/* Increment */
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+			/* Cycles? */
+			printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+			/* Cache */
+			printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		}
+
+		/* Footer information about a sequence */
+
 		/* Get the column that owns this sequence */
 		printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
 						  "\n   pg_catalog.quote_ident(relname) || '.' ||"
@@ -1844,32 +1881,53 @@ describeOneTableDetails(const char *schemaname,
 			switch (PQgetvalue(result, 0, 1)[0])
 			{
 				case 'a':
-					footers[0] = psprintf(_("Owned by: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Owned by: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 				case 'i':
-					footers[0] = psprintf(_("Sequence for identity column: %s"),
-										  PQgetvalue(result, 0, 0));
+					printTableAddFooter(&cont,
+										psprintf(_("Sequence for identity column: %s"),
+												 PQgetvalue(result, 0, 0)));
 					break;
 			}
 		}
 		PQclear(result);
 
-		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
-			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
-							  schemaname, relationname);
-		else
-			printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
-							  schemaname, relationname);
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
+			int			tuples;
 
-		myopt.footers = footers;
-		myopt.topt.default_footer = false;
-		myopt.title = title.data;
-		myopt.translate_header = true;
+			printfPQExpBuffer(&buf,
+							  "SELECT pubname\n"
+							  "FROM pg_catalog.pg_publication p\n"
+							  "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+							  "ORDER BY 1;",
+							  oid);
 
-		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+			result = PSQLexec(buf.data);
+			if (!result)
+				goto error_return;
+
+			/* Might be an empty set - that's ok */
+			tuples = PQntuples(result);
+			if (tuples > 0)
+			{
+				printTableAddFooter(&cont, _("Publications:"));
+
+				for (i = 0; i < tuples; i++)
+				{
+					printfPQExpBuffer(&buf, "    \"%s\"",
+									  PQgetvalue(result, i, 0));
+
+					printTableAddFooter(&cont, buf.data);
+				}
+			}
+			PQclear(result);
+		}
 
-		free(footers[0]);
+		printTable(&cont, pset.queryFout, false, pset.logfile);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6456,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6473,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6597,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6612,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6622,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6708,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6723,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6737,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6176741d20b..64bfd309c9a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3585,11 +3585,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..24e09c76649 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 87c1086ec99..0fe4a4ce110 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4294,6 +4294,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4301,6 +4317,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..38766b5709e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -69,38 +69,32 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
 CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Tables or sequences cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't drop schema from 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables DROP TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 -- fail - can't set schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables SET TABLES IN SCHEMA pub_test;
-ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Schemas cannot be added to or dropped from FOR ALL TABLES publications.
+ERROR:  Schemas cannot be added to or dropped from publication defined FOR ALL TABLES, ALL SEQUENCES, or both
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_fortable FOR TABLE testpub_tbl1;
 RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +103,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +127,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +148,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +160,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +174,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +201,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +216,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  publication parameters are not supported for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +330,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +348,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +380,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +396,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +415,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +426,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +462,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +475,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +593,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +888,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1081,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1292,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1335,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1409,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1418,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1431,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1460,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1486,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1557,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1568,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1589,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1601,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1613,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1624,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1635,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1646,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1677,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1689,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1771,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1792,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1926,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1948,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..72e893bfd51 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37f26f6c6b7..1c613f0ef2b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

Attachment: v20251006_2-0004-Introduce-REFRESH-PUBLICATION-SEQUENCES-.patch (application/octet-stream)
From 0837dc1b126d0a76ca08e639edcc781d87fbd4a2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 17 Sep 2025 11:38:19 +0530
Subject: [PATCH v20251006_2 4/6] Introduce "REFRESH PUBLICATION SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
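
As a usage sketch (the object names and connection string below are
illustrative, not taken from the patch), the intended flow is roughly:

    -- on the publisher
    CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;

    -- on the subscriber
    CREATE SUBSCRIPTION sub_seqs
        CONNECTION 'dbname=postgres host=publisher'
        PUBLICATION pub_seqs;

    -- re-mark all subscribed sequences so they get resynchronized
    ALTER SUBSCRIPTION sub_seqs REFRESH PUBLICATION SEQUENCES;

    -- pick up sequences newly added to or removed from the publication
    ALTER SUBSCRIPTION sub_seqs REFRESH PUBLICATION;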

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_publication.c        |  65 +++-
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/catalog/system_views.sql        |  10 +
 src/backend/commands/subscriptioncmds.c     | 325 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |   2 +-
 src/include/catalog/pg_proc.dat             |   5 +
 src/include/catalog/pg_publication.h        |   2 +-
 src/include/catalog/pg_subscription_rel.h   |  11 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/regress/expected/rules.out         |   8 +
 src/tools/pgindent/typedefs.list            |   1 +
 16 files changed, 397 insertions(+), 116 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..b98d9ae78a6 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -777,8 +777,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE publications; FOR ALL TABLES/SEQUENCES
+ * publications should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -858,14 +858,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -873,12 +875,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1165,7 +1169,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1337,3 +1342,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..94156513ddf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 884b6a23817..c33953c2675 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..833b9700763 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. The relation list includes
+			 * both tables and sequences fetched from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally, create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Mark all sequences of the subscription with the DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get the list of sequences of the subscription. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,7 +2739,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2636,7 +2757,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2644,11 +2765,20 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* Sequences can be published only by publishers of version 19 or later */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2791,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2807,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2700,7 +2839,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c423f6347a9..b592e580846 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10990,6 +10990,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH PUBLICATION SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 5109b197805..45b6d429558 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -152,8 +152,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 92eb17049c3..36dbdb8771b 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 64bfd309c9a..b93f470595c 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2322,7 +2322,7 @@ match_previous_words(int pattern_id,
 					  "ADD PUBLICATION", "DROP PUBLICATION");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+		COMPLETE_WITH("SEQUENCES", "WITH (");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7c20180637f..5580e5ecc9b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12298,6 +12298,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 24e09c76649..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -170,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..3d6e31a0d6c 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,9 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 10a736b986e..6d07cbc3a9b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION_SEQ,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 7f1cb3bb4af..63d8745c7b4 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index dc99941209c..7ce4bb40c05 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2902,6 +2902,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0

Attachment: v20251006_2-0003-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALT.patch (application/octet-stream)
From 436af18c4e30accf51759b9d11116f2bf35e9941 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20251006_2 3/6] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
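
As a minimal illustration of the user-visible wording change (the
subscription name is illustrative), refreshing a disabled subscription
would now report, e.g.:

    ALTER SUBSCRIPTION regress_testsub DISABLE;
    ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
    ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions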
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9a0395e29b0..c423f6347a9 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10985,7 +10985,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 0fe4a4ce110..10a736b986e 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4361,7 +4361,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

Attachment: v20251006_2-0005-New-worker-for-sequence-synchronization-.patch (application/octet-stream)
From b092dc51f86b6f5bed28c82a8c53764309993d8d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 6 Oct 2025 14:21:21 +0530
Subject: [PATCH v20251006_2 5/6] New worker for sequence synchronization
 during subscription management

This patch introduces sequence synchronization:
Sequences have 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are not yet READY,
       using pg_get_sequence_data() on the publisher.
    b) Raises an error if the sequence parameters differ between the
       publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in
       batches of 100.

Sequence synchronization occurs in 3 places (a usage sketch follows this list):
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      no sequences are added to or removed from pg_subscription_rel in
      this case.
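
A minimal usage sketch of the above (the subscription and publication
names here are hypothetical, and the catalog query assumes the usual
'i'/'d'/'r' srsubstate encoding for INIT/DATASYNC/READY):

    -- initial synchronization of all published sequences
    CREATE SUBSCRIPTION mysub CONNECTION '...' PUBLICATION mypub;

    -- pick up newly published sequences (and tables)
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- re-synchronize all sequences already known to the subscription
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION SEQUENCES;

    -- inspect sequence synchronization state on the subscriber
    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';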

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 746 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 239 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1389 insertions(+), 180 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 94156513ddf..0a9ab03ca87 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c33953c2675..9a3e0c20050 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1412,6 +1412,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 833b9700763..5c860139933 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..99e6f566459 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the sequencesync worker's last start time, which is tracked in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..5d2fabf9615
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,746 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
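+		/*
+		 * For example, for a batch containing the sequences ('public', 's1')
+		 * and ('public', 's2'), the command built below is:
+		 *
+		 *   SELECT s.schname, s.seqname, ps.*, seq.seqtypid, seq.seqstart,
+		 *          seq.seqincrement, seq.seqmin, seq.seqmax, seq.seqcycle
+		 *   FROM ( VALUES ('public', 's1'), ('public', 's2') ) AS s (schname, seqname)
+		 *   JOIN pg_namespace n ON n.nspname = s.schname
+		 *   JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
+		 *   JOIN pg_sequence seq ON seq.seqrelid = c.oid
+		 *   JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true
+		 *   ORDER BY s.schname, s.seqname
+		 *
+		 * where 'public', 's1' and 's2' are just illustrative names.
+		 */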
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by batch_size rather than by the number
+		 * of fetched rows, because some sequences may be missing on the
+		 * publisher and the result set may be smaller than the batch.  The
+		 * fetched entries have already been removed above via HASH_REMOVE.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on the publisher were skipped: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if the sequence list has not been built or is empty */
+	if (sequences_to_copy == NULL ||
+		hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling.
+ * Disable the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably due to system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static since
+	 * the same values can be reused until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a85aca2dceb..826e021d3f3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5575,7 +5601,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5695,8 +5722,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5807,6 +5834,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5826,14 +5857,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5904,6 +5937,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5916,9 +5953,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 7e89a8048d5..5f45cbf9dea 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2174,7 +2174,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2192,25 +2192,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2227,6 +2229,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 6bc6be13d2a..f5a09c0f536 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1907,7 +1907,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 5580e5ecc9b..73f8336f1a6 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 3d6e31a0d6c..4d20ebbaf4b 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 8e8adb01176..cad8c2979f8 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -414,6 +416,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -766,7 +769,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 63d8745c7b4..b9823eeaf34 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2190,6 +2190,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2201,7 +2202,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..f83e6f7a802
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,239 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4;
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES should cause sync of
+# changes to existing sequences, but newly published sequences that have not
+# yet been refreshed into the subscription should not be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH PUBLICATION SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION with copy_data = off will not sync newly published sequence'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for the mismatched sequence definition is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message for the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
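
As a quick manual counterpart to what 036_sequences.pl exercises, the
definition-mismatch case can be reproduced with plain psql along the following
lines (a sketch only; the object names and connection string are illustrative):

    -- on the publisher
    CREATE SEQUENCE s_demo INCREMENT BY 1;
    CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

    -- on the subscriber, with a deliberately different increment
    CREATE SEQUENCE s_demo INCREMENT BY 10;
    CREATE SUBSCRIPTION sub_seq
        CONNECTION 'host=localhost dbname=postgres' PUBLICATION pub_seq;
    -- the sequencesync worker reports the mismatch, exits, and is respawned
    -- until the definitions agree; sequence_sync_error_count keeps growing

    -- on the subscriber, align the definition so the next retry succeeds
    ALTER SEQUENCE s_demo INCREMENT BY 1;
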
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 7ce4bb40c05..4dc796d7409 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1627,6 +1627,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0
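
With the stats and catalog changes in the patch above, sequence synchronization
progress and failures can be observed on the subscriber roughly as follows (a
sketch; it relies only on the sequence_sync_error_count column and the sequence
state codes introduced by this patch set):

    -- per-subscription error counters, including the new
    -- sequence_sync_error_count
    SELECT subname, apply_error_count, sync_error_count,
           sequence_sync_error_count
    FROM pg_stat_subscription_stats;

    -- sync state of the sequences tracked in pg_subscription_rel
    -- ('i' = initialize, 'd' = re-synchronize, 'r' = ready)
    SELECT sr.srrelid::regclass AS seq, sr.srsubstate
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';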

Attachment: v20251006_2-0006-Documentation-for-sequence-synchronizati.patch (application/octet-stream)
From 354fe7b7874da88f5b58c7245f1efb20757bfe0b Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 2 Sep 2025 16:35:21 +0530
Subject: [PATCH v20251006_2 6/6] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 ++
 doc/src/sgml/logical-replication.sgml     | 279 +++++++++++++++++++---
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 ++++-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++++--
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 doc/src/sgml/system-views.sgml            |  66 +++++
 9 files changed, 501 insertions(+), 83 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e9095bedf21..72d597097a3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8186,16 +8186,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8229,7 +8232,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8238,12 +8241,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e9b420f3ddb..7138de1acb8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last value written to the sequence by <function>nextval</function>
+        or <function>setval</function>, <literal>is_called</literal> indicates
+        whether the sequence has been used, and <literal>page_lsn</literal> is
+        the LSN of the most recent WAL record that modified the sequence.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..9313cbfd1fd 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,20 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1049,24 +1053,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1495,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
@@ -1743,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. If they differ, an ERROR listing all
+    mismatched sequences is logged and the worker exits. The apply worker
+    detects this failure and repeatedly respawns the sequence synchronization
+    worker until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Because incremental sequence changes are not replicated, sequence values
+    on the subscriber become stale as the sequences advance on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2088,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2421,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2435,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 786aa2ac5f6..6771d1b4479 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2030,8 +2030,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2177,6 +2178,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..8309ca4b039 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-publication-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-publication-sequences">
+    <term><literal>REFRESH PUBLICATION SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH PUBLICATION SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
-- 
2.43.0

#348shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#347)
Re: Logical Replication of sequences

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

Here are the rebased remaining patches.

Thank you for the patches. Please find a few comments on 001:

1)
Shall we have 'pg_publication_sequences' created in the first patch
itself, to help verify which sequences are added to ALL SEQUENCES
publications (see the example query after these comments)? Currently it
is in the 4th patch.

2)
postgres=# create publication pub1 for all sequences WITH(publish='insert');
ERROR: publication parameters are not supported for publications
defined as FOR ALL SEQUENCES

postgres=# alter publication pub1 add table tab1;
ERROR: Tables or sequences cannot be added to or dropped from
publication defined FOR ALL TABLES, ALL SEQUENCES, or both

a) First msg has 'as', while second does not. Shall we make both the
same? I think we can get rid of 'as'.
b) Shouldn't the error msg start with lower case (second one)?

3)
+ * Process all_objects_list to set all_tables/all_sequences.

Can we please replace 'all_tables/all_sequences' with 'all_tables
and/or all_sequences'?

4)
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType

Should it be:
'Types of objects supported by FOR ALL publications'

5)
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH
clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse
FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence
synchronization and will be ignored

The comment can be changed to say it will emit a NOTICE (instead of a warning).

6)
commit msg:
--
Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
--

This seems misleading, as we are not planning "FOR SEQUENCE" support in
the current set of patches; maybe we can rephrase it a bit.
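
Regarding comment 1, the kind of check I have in mind would be something
like this (illustrative output only; the view columns pubname, schemaname
and sequencename are taken from the later doc patch):

test_pub=# SELECT * FROM pg_publication_sequences;
 pubname | schemaname | sequencename
---------+------------+--------------
 pub1    | public     | s1
 pub1    | public     | s2
(2 rows)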

thanks
Shveta

#349Amit Kapila
amit.kapila16@gmail.com
In reply to: shveta malik (#348)
1 attachment(s)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 10:53 AM shveta malik <shveta.malik@gmail.com> wrote:

2)
postgres=# create publication pub1 for all sequences WITH(publish='insert');
ERROR: publication parameters are not supported for publications
defined as FOR ALL SEQUENCES

postgres=# alter publication pub1 add table tab1;
ERROR: Tables or sequences cannot be added to or dropped from
publication defined FOR ALL TABLES, ALL SEQUENCES, or both

a) First msg has 'as', while second does not. Shall we make both the
same? I think we can get rid of 'as'.
b) Shouldn't the error msg start with lower case (second one)?

The (b) is related to the following change:

+  if (tables && (pubform->puballtables || pubform->puballsequences))
    ereport(ERROR,
-        (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-         errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-            NameStr(pubform->pubname)),
-         errdetail("Tables cannot be added to or dropped from FOR ALL
TABLES publications.")));
+        errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+        errmsg("Tables or sequences cannot be added to or dropped
from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
}
I see a bigger problem here: we turned the errdetail into the errmsg. We
are trying to combine the FOR ALL TABLES and FOR ALL SEQUENCES messages,
which caused this change/confusion. It is better to keep them separate to
avoid confusion. A similar problem exists for the following message:
- if (schemaidlist && pubform->puballtables)
+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
  ereport(ERROR,
- (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg("publication \"%s\" is defined as FOR ALL TABLES",
- NameStr(pubform->pubname)),
- errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
publications.")));
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("Schemas cannot be added to or dropped from publication
defined FOR ALL TABLES, ALL SEQUENCES, or both"));

If we fix both of these then we don't need to do anything for point (a).
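
For illustration, keeping the messages separate could look roughly like
the following (an untested sketch based on the hunks quoted above; it
keeps the original lower-case errmsg plus sentence-style errdetail):

    if (tables && pubform->puballtables)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
                        NameStr(pubform->pubname)),
                 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));

    if (tables && pubform->puballsequences)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
                        NameStr(pubform->pubname)),
                 errdetail("Tables cannot be added to or dropped from FOR ALL SEQUENCES publications.")));

The same split would apply to the schema variant of the message.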

Few other comments:
=================
1. Is there a reason not to change just the footers part in
describeOneTableDetails()?
2. I think we can move create_publication.sgml and
pg_publication_sequences related changes to 0001 from doc patch.
3. Atop is_publishable_class(), we mention that it does the same checks as
check_publication_add_relation(), but the sequence check is different. Is
that because check_publication_add_relation() is not called for FOR ALL
SEQUENCES? If so, I have modified a few related comments in the attached
patch.
4.
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate,
CreatePublicationStmt *stmt)
    &publish_via_partition_root_given,
    &publish_via_partition_root,
    &publish_generated_columns_given,
-   &publish_generated_columns);
+   &publish_generated_columns,
+   def_pub_action);
+
+ if (stmt->for_all_sequences &&
+ (publish_given || publish_via_partition_root_given ||
+ publish_generated_columns_given))
+ {
+ /*
+ * WITH clause parameters are not applicable when creating a FOR ALL
+ * SEQUENCES publication. If the publication includes tables as well,
+ * issue a notice.
+ */
+ if (!stmt->for_all_tables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication parameters are not supported for publications
defined as FOR ALL SEQUENCES"));
+
+ ereport(NOTICE,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication parameters are not applicable to sequence
synchronization and will be ignored"));
+ }

This change looks a bit ad hoc to me. I think it would be better to
handle this inside parse_publication_options(). The function can take
for_all_sequences as an additional parameter and then use that to raise
an error when any options are present.
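
Roughly like this, inside parse_publication_options() after the options
have been parsed (just a sketch; it assumes the callers pass down
for_all_sequences and for_all_tables flags):

    if (for_all_sequences &&
        (publish_given || publish_via_partition_root_given ||
         publish_generated_columns_given))
    {
        if (!for_all_tables)
            ereport(ERROR,
                    errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                    errmsg("publication parameters are not supported for publications defined as FOR ALL SEQUENCES"));

        ereport(NOTICE,
                errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
    }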

--
With Regards,
Amit Kapila.

Attachments:

v1_amit_1.patch.txt (text/plain; charset=US-ASCII)
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b306455aaad..2e78676e793 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -115,8 +115,10 @@ check_publication_add_schema(Oid schemaid)
  * Returns if relation represented by oid and Form_pg_class entry
  * is publishable.
  *
- * Does same checks as check_publication_add_relation() above, but does not
- * need relation to be opened and also does not throw errors.
+ * Does same checks as check_publication_add_relation() above except for
+ * RELKIND_SEQUENCE, but does not need relation to be opened and also does
+ * not throw errors. Here, the additional check is to support ALL SEQUENCES
+ * publication.
  *
  * XXX  This also excludes all tables with relid < FirstNormalObjectId,
  * ie all tables created during initdb.  This mainly affects the preinstalled
@@ -129,9 +131,6 @@ check_publication_add_schema(Oid schemaid)
  * dropped and reloaded and then it'll be considered publishable.  The best
  * long-term solution may be to add a "relispublishable" bool to pg_class,
  * and depend on that instead of OID checks.
- *
- * XXX  This also allows sequences to be included, which is necessary
- * to retrieve the list of sequences for the ALL SEQUENCES publication.
  */
 static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
#350Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#347)
Re: Logical Replication of sequences

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 6 Oct 2025 at 12:07, vignesh C <vignesh21@gmail.com> wrote:

On Sat, 4 Oct 2025 at 21:24, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

I thought to keep the log_cnt value the same as on the publisher.
However, I have verified that the log_cnt value is not retained across an
upgrade; even if we copy log_cnt, the value will not be preserved. The
attached v20251006-0001-Enhance-pg_get_sequence_data-function.patch has
the changes to remove log_cnt.

Here are the rebased remaining patches.

While testing the patches with different combinations for creating
publications, I do not understand why we don't support ALL SEQUENCES
together with a table option. Or is that pending future work?

postgres[1390699]=# CREATE PUBLICATION pub FOR ALL SEQUENCES, table test;
ERROR: 42601: syntax error at or near "table"
LINE 1: CREATE PUBLICATION pub FOR ALL SEQUENCES, table test;
LOCATION: scanner_yyerror, scan.l:1236

postgres[1390699]=# CREATE PUBLICATION pub FOR table test, ALL SEQUENCES;
ERROR: 42601: syntax error at or near "all"
LINE 1: CREATE PUBLICATION pub FOR table test, all sequences;

I am doing more review and testing from a usability perspective, but
thought of asking this while I continue reviewing.

--
Regards,
Dilip Kumar
Google

#351Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#350)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 2:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 6 Oct 2025 at 12:07, vignesh C <vignesh21@gmail.com> wrote:

On Sat, 4 Oct 2025 at 21:24, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

I thought to keep the log_cnt value the same as on the publisher.
However, I have verified that the log_cnt value is not retained across an
upgrade; even if we copy log_cnt, the value will not be preserved. The
attached v20251006-0001-Enhance-pg_get_sequence_data-function.patch has
the changes to remove log_cnt.

Here are the rebased remaining patches.

While testing the patches with different combinations for creating
publications, I do not understand why we don't support ALL SEQUENCES
together with a table option. Or is that pending future work?

Yes, it is left for the future, similar to cases like FOR SEQUENCE s1
or FOR SEQUENCES IN SCHEMA. The key idea was to first support the
cases required for upgrade; we can later extend the feature after
some user feedback or a separate discussion with -hackers to see what
others think. Does that sound reasonable to you?

I am doing more review and testing from a usability perspective, but
thought of asking this while I continue reviewing.

Thanks.

--
With Regards,
Amit Kapila.

#352vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#349)
1 attachment(s)
Re: Logical Replication of sequences

On Tue, 7 Oct 2025 at 12:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 7, 2025 at 10:53 AM shveta malik <shveta.malik@gmail.com> wrote:

2)
postgres=# create publication pub1 for all sequences WITH(publish='insert');
ERROR: publication parameters are not supported for publications
defined as FOR ALL SEQUENCES

postgres=# alter publication pub1 add table tab1;
ERROR: Tables or sequences cannot be added to or dropped from
publication defined FOR ALL TABLES, ALL SEQUENCES, or both

a) First msg has 'as', while second does not. Shall we make both the
same? I think we can get rid of 'as'.
b) Shouldn't the error msg start with lower case (second one)?

The (b) is related to the following change:

+  if (tables && (pubform->puballtables || pubform->puballsequences))
ereport(ERROR,
-        (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-         errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-            NameStr(pubform->pubname)),
-         errdetail("Tables cannot be added to or dropped from FOR ALL
TABLES publications.")));
+        errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+        errmsg("Tables or sequences cannot be added to or dropped
from publication defined FOR ALL TABLES, ALL SEQUENCES, or both"));
}
I see a bigger problem here: we turned the errdetail into the errmsg. We
are trying to combine the FOR ALL TABLES and FOR ALL SEQUENCES messages,
which caused this change/confusion. It is better to keep them separate to
avoid confusion. A similar problem exists for the following message:
- if (schemaidlist && pubform->puballtables)
+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
ereport(ERROR,
- (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg("publication \"%s\" is defined as FOR ALL TABLES",
- NameStr(pubform->pubname)),
- errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
publications.")));
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("Schemas cannot be added to or dropped from publication
defined FOR ALL TABLES, ALL SEQUENCES, or both"));

Modified to keep them as separate error messages.

If we fix both of these then we don't need to do anything for point (a).

Agreed

Few other comments:
=================
1. Is there a reason not to change just the footers part in
describeOneTableDetails()?

I tried the approach of keeping it as a footer, and it simplifies the code
further. Updated the patch accordingly.
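
For reference, with the footer approach \d on a published sequence would
show something like this (illustrative output, not taken verbatim from
the patch):

test_pub=# \d s1
                           Sequence "public.s1"
  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
 bigint |    10 |       1 | 9223372036854775807 |         1 | no      |     1
Publications:
    "pub1"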

2. I think we can move create_publication.sgml and
pg_publication_sequences related changes to 0001 from doc patch.

Modified

3. Atop is_publishable_class(), we mention that it does the same checks as
check_publication_add_relation(), but the sequence check is different. Is
that because check_publication_add_relation() is not called for FOR ALL
SEQUENCES? If so, I have modified a few related comments in the attached
patch.

Updated

4.
@@ -878,13 +885,35 @@ CreatePublication(ParseState *pstate,
CreatePublicationStmt *stmt)
&publish_via_partition_root_given,
&publish_via_partition_root,
&publish_generated_columns_given,
-   &publish_generated_columns);
+   &publish_generated_columns,
+   def_pub_action);
+
+ if (stmt->for_all_sequences &&
+ (publish_given || publish_via_partition_root_given ||
+ publish_generated_columns_given))
+ {
+ /*
+ * WITH clause parameters are not applicable when creating a FOR ALL
+ * SEQUENCES publication. If the publication includes tables as well,
+ * issue a notice.
+ */
+ if (!stmt->for_all_tables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication parameters are not supported for publications
defined as FOR ALL SEQUENCES"));
+
+ ereport(NOTICE,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication parameters are not applicable to sequence
synchronization and will be ignored"));
+ }

This change looks a bit ad hoc to me. I think it would be better to
handle this inside parse_publication_options(). The function can take
for_all_sequences as an additional parameter and then use that to raise
an error when any options are present.

Modified

Thanks for the comments; the attached patch has the changes for them.

Regards,
Vignesh

Attachments:

v20251007-0001-Introduce-ALL-SEQUENCES-support-for-Postgr.patch (application/octet-stream)
From f29ad1091a0540cbdfbe89b7c00d74b4ed1df66e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 7 Oct 2025 11:57:48 +0530
Subject: [PATCH v20251007] Introduce "ALL SEQUENCES" support for PostgreSQL
 logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql commands have been enhanced to display which
publications contain the specified sequence (\d command), and whether a
specified publication includes all sequences (\dRp command).

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed later.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/logical-replication.sgml     |  42 +-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++-
 doc/src/sgml/system-views.sgml            |  66 +++
 src/backend/catalog/pg_publication.c      |  75 ++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    | 139 ++++--
 src/backend/parser/gram.y                 |  85 +++-
 src/bin/pg_dump/pg_dump.c                 |  93 ++--
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   |  84 +++-
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   9 +-
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 565 +++++++++++++---------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  44 ++
 src/tools/pgindent/typedefs.list          |   2 +
 20 files changed, 962 insertions(+), 391 deletions(-)

diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..b01f5e998b2 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,18 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>.
   </para>
 
   <para>
@@ -1049,24 +1051,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1493,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..c0eb5fff8de 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable for sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..ac2f4ee3561 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -115,8 +115,10 @@ check_publication_add_schema(Oid schemaid)
  * Returns if relation represented by oid and Form_pg_class entry
  * is publishable.
  *
- * Does same checks as check_publication_add_relation() above, but does not
- * need relation to be opened and also does not throw errors.
+ * Does the same checks as check_publication_add_relation() above, except
+ * that RELKIND_SEQUENCE is additionally allowed here to support ALL
+ * SEQUENCES publications.  It does not need the relation to be opened and
+ * does not throw errors.
  *
  * XXX  This also excludes all tables with relid < FirstNormalObjectId,
  * ie all tables created during initdb.  This mainly affects the preinstalled
@@ -134,7 +136,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -773,8 +776,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE publications; FOR ALL TABLES and
+ * FOR ALL SEQUENCES publications should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -854,14 +857,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables.  This is not applicable to FOR ALL SEQUENCES
+ * publications.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -869,12 +874,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
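+	/*
+	 * Sequences cannot be partitioned, so publish_via_partition_root never
+	 * applies to them.
+	 */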
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1083,6 +1090,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1160,7 +1168,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1332,3 +1341,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
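+		/*
+		 * Only FOR ALL SEQUENCES publications can contain sequences, so any
+		 * other publication simply yields an empty set.
+		 */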
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 884b6a23817..c33953c2675 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index f4fc17acbe1..63b44a9030f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -76,12 +76,15 @@ static char defGetGeneratedColsOption(DefElem *def);
 static void
 parse_publication_options(ParseState *pstate,
 						  List *options,
+						  bool allsequences,
+						  bool alltables,
 						  bool *publish_given,
 						  PublicationActions *pubactions,
 						  bool *publish_via_partition_root_given,
 						  bool *publish_via_partition_root,
 						  bool *publish_generated_columns_given,
-						  char *publish_generated_columns)
+						  char *publish_generated_columns,
+						  bool def_pub_action)
 {
 	ListCell   *lc;
 
@@ -90,10 +93,10 @@ parse_publication_options(ParseState *pstate,
 	*publish_generated_columns_given = false;
 
 	/* defaults */
-	pubactions->pubinsert = true;
-	pubactions->pubupdate = true;
-	pubactions->pubdelete = true;
-	pubactions->pubtruncate = true;
+	pubactions->pubinsert = def_pub_action;
+	pubactions->pubupdate = def_pub_action;
+	pubactions->pubdelete = def_pub_action;
+	pubactions->pubtruncate = def_pub_action;
 	*publish_via_partition_root = false;
 	*publish_generated_columns = PUBLISH_GENCOLS_NONE;
 
@@ -168,6 +171,20 @@ parse_publication_options(ParseState *pstate,
 					(errcode(ERRCODE_SYNTAX_ERROR),
 					 errmsg("unrecognized publication parameter: \"%s\"", defel->defname)));
 	}
+
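+	/*
+	 * Publication parameters are meaningless for a sequence-only publication,
+	 * so reject them; if ALL TABLES is also specified they still apply to the
+	 * tables, so only emit a notice that sequences ignore them.
+	 */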
+	if (allsequences &&
+		(*publish_given || *publish_via_partition_root_given ||
+		 *publish_generated_columns_given))
+	{
+		if (!alltables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication parameters are not applicable for publications defined as FOR ALL SEQUENCES"));
+
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored"));
+	}
 }
 
 /*
@@ -836,6 +853,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	bool		publish_via_partition_root_given;
 	bool		publish_via_partition_root;
 	bool		publish_generated_columns_given;
+	bool		def_pub_action;
 	char		publish_generated_columns;
 	AclResult	aclresult;
 	List	   *relations = NIL;
@@ -847,11 +865,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES require superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -872,19 +893,27 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		DirectFunctionCall1(namein, CStringGetDatum(stmt->pubname));
 	values[Anum_pg_publication_pubowner - 1] = ObjectIdGetDatum(GetUserId());
 
+	/* Publication actions are not applicable for sequence-only publications */
+	def_pub_action = (stmt->for_all_sequences) ? stmt->for_all_tables : true;
+
 	parse_publication_options(pstate,
 							  stmt->options,
+							  stmt->for_all_sequences,
+							  stmt->for_all_tables,
 							  &publish_given, &pubactions,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
+							  &publish_generated_columns,
+							  def_pub_action);
 
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -917,7 +946,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		/* Invalidate relcache so that publication info is rebuilt. */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -989,15 +1018,18 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	List	   *root_relids = NIL;
 	ListCell   *lc;
 
+	pubform = (Form_pg_publication) GETSTRUCT(tup);
+
 	parse_publication_options(pstate,
 							  stmt->options,
+							  pubform->puballsequences,
+							  pubform->puballtables,
 							  &publish_given, &pubactions,
 							  &publish_via_partition_root_given,
 							  &publish_via_partition_root,
 							  &publish_generated_columns_given,
-							  &publish_generated_columns);
-
-	pubform = (Form_pg_publication) GETSTRUCT(tup);
+							  &publish_generated_columns,
+							  true);
 
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
@@ -1451,20 +1483,50 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+						   NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (tables && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+						   NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 }
 
 /*
@@ -2014,19 +2076,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 57bf7a7c7f2..0a605a7a4d5 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -447,7 +452,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10704,7 +10710,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ALL pub_obj_type] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10724,13 +10735,14 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables, &n->for_all_sequences, yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10842,6 +10854,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+	;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19639,6 +19673,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables and/or all_sequences.
+ * Also, checks if the pub_object_type has been specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..7af12c8c543 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4633,7 +4642,6 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	PQExpBuffer delq;
 	PQExpBuffer query;
 	char	   *qpubname;
-	bool		first = true;
 
 	/* Do nothing if not dumping schema */
 	if (!dopt->dumpSchema)
@@ -4650,52 +4658,65 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
-	appendPQExpBufferStr(query, " WITH (publish = '");
-	if (pubinfo->pubinsert)
+	/* The WITH clause is not emitted for sequence-only publications */
+	if (!pubinfo->puballsequences || pubinfo->puballtables)
 	{
-		appendPQExpBufferStr(query, "insert");
-		first = false;
-	}
+		bool		first = true;
 
-	if (pubinfo->pubupdate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+		appendPQExpBufferStr(query, " WITH (publish = '");
+		if (pubinfo->pubinsert)
+		{
+			appendPQExpBufferStr(query, "insert");
+			first = false;
+		}
 
-		appendPQExpBufferStr(query, "update");
-		first = false;
-	}
+		if (pubinfo->pubupdate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-	if (pubinfo->pubdelete)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+			appendPQExpBufferStr(query, "update");
+			first = false;
+		}
 
-		appendPQExpBufferStr(query, "delete");
-		first = false;
-	}
+		if (pubinfo->pubdelete)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
 
-	if (pubinfo->pubtruncate)
-	{
-		if (!first)
-			appendPQExpBufferStr(query, ", ");
+			appendPQExpBufferStr(query, "delete");
+			first = false;
+		}
 
-		appendPQExpBufferStr(query, "truncate");
-		first = false;
-	}
+		if (pubinfo->pubtruncate)
+		{
+			if (!first)
+				appendPQExpBufferStr(query, ", ");
+
+			appendPQExpBufferStr(query, "truncate");
+			first = false;
+		}
 
-	appendPQExpBufferChar(query, '\'');
+		appendPQExpBufferChar(query, '\'');
 
-	if (pubinfo->pubviaroot)
-		appendPQExpBufferStr(query, ", publish_via_partition_root = true");
+		if (pubinfo->pubviaroot)
+			appendPQExpBufferStr(query, ", publish_via_partition_root = true");
 
-	if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
-		appendPQExpBufferStr(query, ", publish_generated_columns = stored");
+		if (pubinfo->pubgencols_type == PUBLISH_GENCOLS_STORED)
+			appendPQExpBufferStr(query,
+								 ", publish_generated_columns = stored");
 
-	appendPQExpBufferStr(query, ");\n");
+		appendPQExpBufferStr(query, ")");
+	}
+
+	appendPQExpBufferStr(query, ";\n");
 
 	if (pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, pubinfo->dobj.catId, pubinfo->dobj.dumpId,
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..28794ef85da 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub5' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub5
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub5 FOR ALL SEQUENCES;\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36f24502842 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1759,7 +1759,7 @@ describeOneTableDetails(const char *schemaname,
 	{
 		PGresult   *result = NULL;
 		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
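+		/* room for an extra publications footer plus the NULL terminator */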
+		char	   *footers[3] = {NULL, NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
@@ -1855,6 +1855,39 @@ describeOneTableDetails(const char *schemaname,
 		}
 		PQclear(result);
 
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
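+			/*
+			 * Sequences can only be published through FOR ALL SEQUENCES
+			 * publications, so checking puballsequences together with
+			 * pg_relation_is_publishable() is sufficient here.
+			 */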
+			printfPQExpBuffer(&buf, "SELECT pubname FROM pg_catalog.pg_publication p"
+							  "\nWHERE p.puballsequences"
+							  "\n AND pg_catalog.pg_relation_is_publishable('%s')"
+							  "\nORDER BY 1",
+							  oid);
+
+			result = PSQLexec(buf.data);
+			if (result)
+			{
+				int			nrows = PQntuples(result);
+
+				if (nrows > 0)
+				{
+					printfPQExpBuffer(&tmpbuf, _("Publications:"));
+					for (i = 0; i < nrows; i++)
+						appendPQExpBuffer(&tmpbuf, "\n    \"%s\"", PQgetvalue(result, i, 0));
+
+					/* Store in the first available footer slot */
+					if (footers[0] == NULL)
+						footers[0] = pg_strdup(tmpbuf.data);
+					else
+						footers[1] = pg_strdup(tmpbuf.data);
+
+					resetPQExpBuffer(&tmpbuf);
+				}
+
+				PQclear(result);
+			}
+		}
+
 		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
 			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
 							  schemaname, relationname);
@@ -1870,6 +1903,7 @@ describeOneTableDetails(const char *schemaname,
 		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
 
 		free(footers[0]);
+		free(footers[1]);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6432,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6449,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6573,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6588,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6598,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6684,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6699,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6713,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
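+		/*
+		 * puballsequences is always part of the result set (selected as
+		 * "false" for older servers), so the later column numbers are the
+		 * same for all server versions.
+		 */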
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6176741d20b..64bfd309c9a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3585,11 +3585,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7c20180637f..5580e5ecc9b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12298,6 +12298,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass all
+	 * sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -163,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 87c1086ec99..dc09d1a3f03 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4294,6 +4294,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Types of objects supported by FOR ALL publications
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
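+/*
+ * A single "ALL <object type>" entry, as used in
+ * CREATE PUBLICATION ... FOR ALL TABLES / ALL SEQUENCES.
+ */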
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4301,6 +4317,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..1634732db2e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,105 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | f       | f       | f       | f         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause will raise a NOTICE
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withclause;
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+ERROR:  publication parameters are not applicable for publications defined as FOR ALL SEQUENCES
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +336,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +354,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +386,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +402,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +421,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +432,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +468,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +481,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +599,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +894,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1087,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1298,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1341,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1415,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1424,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1437,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1466,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1492,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1563,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1574,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1595,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1607,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1619,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1630,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1641,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1652,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1683,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1695,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1777,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1798,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1932,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1954,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 7f1cb3bb4af..63d8745c7b4 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..1367d56639b 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,50 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH clause will raise a NOTICE
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1, regress_pub_forallsequences2, regress_pub_for_allsequences_alltables, regress_pub_for_allsequences_alltables_withcaluse;
+
+-- fail - Specifying ALL SEQUENCES along with WITH clause is not supported
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37f26f6c6b7..1c613f0ef2b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2352,6 +2352,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#353Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#351)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 3:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 7, 2025 at 2:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 6 Oct 2025 at 12:07, vignesh C <vignesh21@gmail.com> wrote:

On Sat, 4 Oct 2025 at 21:24, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

I thought to keep the log_cnt value the same value as the publisher.
I have verified from the upgrade that we don't retain the log_cnt
value after upgrade, even if we copy log_cnt, the value will not be
retained. The attached
v20251006-0001-Enhance-pg_get_sequence_data-function.patch has the
changes to remove log_cnt.

Here is the rebased remaining patches.

While testing the patches with different combinations to make
publications, I do not understand why we don't support ALL SEQUENCE
with some table option, or is it future pending work.

Yes, it is left for future similar to the cases like FOR SEQUENCE s1
or FOR SEQUENCES IN SCHEMA. The key idea was to first support the
cases required for upgrade and we can later extend the feature after
some user feedback or separate discussion with -hackers to see what
others think. Does that sound reasonable to you?

Yeah, that's correct. I think the main use case for sequence
synchronization is upgrade, and it makes sense to use ALL TABLES/ALL
SEQUENCES for that. However, if a user is upgrading only selective
tables, they might not be able to use ALL SEQUENCES for now, and that
should be fine since we are going to provide this as add-on
functionality.

I have one more question: while testing the sequence sync, I found
this behavior is documented as well [1], but what's the reasoning
behind it? Why REFRESH PUBLICATION will synchronize only newly added
sequences and need to use REFRESH PUBLICATION SEQUENCES to
re-synchronize all sequences.

I mean what will be the use case where users just want to synchronize
the newly added sequences and not others?

[1]
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link
linkend="sql-altersubscription-params-refresh-publication-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION
SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>

--
Regards,
Dilip Kumar
Google

#354vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#348)
Re: Logical Replication of sequences

On Tue, 7 Oct 2025 at 10:53, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

Here is the rebased remaining patches.

Thank you for the patches, please find a few comments on 001:

1)
Shall we have 'pg_publication_sequences' created in the first patch
itself to help verify which all sequences are added to ALL SEQ
publications? Currently it is in 4th patch.

Moved it to 0001 patch
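
For reference, once the view is in 0001, something like the following
can be used to check it (a usage sketch only; the publication name
below is just the one from the regression test):

SELECT * FROM pg_publication_sequences
  WHERE pubname = 'regress_pub_forallsequences1';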

2)
postgres=# create publication pub1 for all sequences WITH(publish='insert');
ERROR: publication parameters are not supported for publications
defined as FOR ALL SEQUENCES

postgres=# alter publication pub1 add table tab1;
ERROR: Tables or sequences cannot be added to or dropped from
publication defined FOR ALL TABLES, ALL SEQUENCES, or both

a) First msg has 'as', while second does not. Shall we make both the
same? I think we can get rid of 'as'.

The second error message has been changed based on another comment,
so both of them will have 'as' now.

b) Shouldn't the error msg start with lower case (second one)?

Updated

3)
+ * Process all_objects_list to set all_tables/all_sequences.

can we please replace 'all_tables/all_sequences' with 'all_tables
and/or all_sequences'

Updated

4)
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType

Should it be:
'Types of objects supported by FOR ALL publications'

Modified

5)
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH
clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse
FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence
synchronization and will be ignored

Comment can be changed to say it will emit/raise a NOTICE (instead of warning).

Modified

6)
commit msg:
--
Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
--

This seems misleading, as we are not planning the "FOR SEQUENCE" in
the current set of patches, maybe we can rephrase it a bit.

Modified

Thanks for the comments, the v20251007 patch attached at [1] has the
changes for the same.

[1]: /messages/by-id/CALDaNm3NJBiXpQ3uY8=XhSPd6jBn2rTJS6wJZSFo6m2pzW5hqw@mail.gmail.com

Regards,
Vignesh

#355Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#353)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 4:56 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 3:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 7, 2025 at 2:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 6 Oct 2025 at 12:07, vignesh C <vignesh21@gmail.com> wrote:

On Sat, 4 Oct 2025 at 21:24, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

I thought to keep the log_cnt value the same value as the publisher.
I have verified from the upgrade that we don't retain the log_cnt
value after upgrade, even if we copy log_cnt, the value will not be
retained. The attached
v20251006-0001-Enhance-pg_get_sequence_data-function.patch has the
changes to remove log_cnt.

Here is the rebased remaining patches.

While testing the patches with different combinations to make
publications, I do not understand why we don't support ALL SEQUENCE
with some table option, or is it future pending work.

Yes, it is left for future similar to the cases like FOR SEQUENCE s1
or FOR SEQUENCES IN SCHEMA. The key idea was to first support the
cases required for upgrade and we can later extend the feature after
some user feedback or separate discussion with -hackers to see what
others think. Does that sound reasonable to you?

Yeah, that's correct. I think the main use case for sequence
synchronization is upgrade, and it makes sense to use ALL TABLES/ALL
SEQUENCES for that. However, if a user is upgrading only selective
tables, they might not be able to use ALL SEQUENCES for now, and that
should be fine since we are going to provide this as add-on
functionality.

Yes, we can add the additional functionality for selective sequences
if required but do we have an option to allow upgrade of selective
tables?

I have one more question: while testing the sequence sync, I found
this behavior is documented as well[1], but what's the reasoning
behind it? Why REFRESH PUBLICATION will synchronize only newly added
sequences and need to use REFRESH PUBLICATION SEQUENCES to
re-synchronize all sequences.

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is meant to
sync the existing sequences; it shouldn't add any new sequences. If
that is too confusing, we can discuss having a different syntax for
it. In the meantime, let's make the 0001 publication-side patch ready
for commit.
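
To illustrate the intended split (a sketch only, with a hypothetical
subscription name and the subscriber-side syntax proposed in this
patch set):

-- pick up newly added sequences (and tables) and copy their initial
-- contents; sequences already synced are left untouched
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

-- re-synchronize the sequences already part of the subscription,
-- e.g. just before switching the workload over
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;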

--
With Regards,
Amit Kapila.

#356Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#355)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 7, 2025 at 4:56 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 3:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 7, 2025 at 2:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 6 Oct 2025 at 12:07, vignesh C <vignesh21@gmail.com> wrote:

On Sat, 4 Oct 2025 at 21:24, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 30, 2025 at 9:55 PM vignesh C <vignesh21@gmail.com> wrote:

In the 0001 patch, pg_get_sequence_data() exposes two new fields
log_cnt and page_lsn. I see that the later subscriber-side patch uses
both, the first one in SetSequence(). It is not clear from the
comments or the commit message of 0001 why it is necessary to use
log_cnt when setting the sequence. Can you explain what the problem
will be if we don't use log_cnt during sequence sync?

I thought to keep the log_cnt value the same value as the publisher.
I have verified from the upgrade that we don't retain the log_cnt
value after upgrade, even if we copy log_cnt, the value will not be
retained. The attached
v20251006-0001-Enhance-pg_get_sequence_data-function.patch has the
changes to remove log_cnt.

Here is the rebased remaining patches.

While testing the patches with different combinations to make
publications, I do not understand why we don't support ALL SEQUENCE
with some table option, or is it future pending work.

Yes, it is left for future similar to the cases like FOR SEQUENCE s1
or FOR SEQUENCES IN SCHEMA. The key idea was to first support the
cases required for upgrade and we can later extend the feature after
some user feedback or separate discussion with -hackers to see what
others think. Does that sound reasonable to you?

Yeah, that's correct. I think the main use case for sequence
synchronization is upgrade, and it makes sense to use ALL TABLES/ALL
SEQUENCES for that. However, if a user is upgrading only selective
tables, they might not be able to use ALL SEQUENCES for now, and that
should be fine since we are going to provide this as add-on
functionality.

Yes, we can add the additional functionality for selective sequences
if required but do we have an option to allow upgrade of selective
tables?

If the user is upgrading using logical replication, then there is an
option to set up replication from the current version to the next
major version, and the user can then selectively publish the tables
that are supposed to be streamed to the next major version, right?
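
For example (hypothetical object and connection names, just to
illustrate the selective setup):

-- on the old-version publisher
CREATE PUBLICATION upgrade_pub FOR TABLE t1, t2;

-- on the new-version subscriber
CREATE SUBSCRIPTION upgrade_sub
  CONNECTION 'host=oldhost dbname=postgres'
  PUBLICATION upgrade_pub;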

I have one more question: while testing the sequence sync, I found
this behavior is documented as well[1], but what's the reasoning
behind it? Why REFRESH PUBLICATION will synchronize only newly added
sequences and need to use REFRESH PUBLICATION SEQUENCES to
re-synchronize all sequences.

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is meant to
sync the existing sequences; it shouldn't add any new sequences. If
that is too confusing, we can discuss having a different syntax for
it.

Sure, let's discuss this when we get this patch at the start of the
commit queue.

In the meantime, let's make the 0001 publication-side patch ready for commit.

Makes sense.

--
Regards,
Dilip Kumar
Google

#357Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#356)
Re: Logical Replication of sequences

On Wed, Oct 8, 2025 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Yes, we can add the additional functionality for selective sequences
if required but do we have an option to allow upgrade of selective
tables?

If the user is upgrading using logical replication, then there is an
option to set up replication from the current version to the next
major version, and the user can then selectively publish the tables
that are supposed to be streamed to the next major version, right?

Yes, that is possible. However, during the upgrade of the publisher
node, if the user wants to shift the workload to the subscriber then
ideally she should sync all sequences. But I think there could be
cases where users may wish to selectively replicate sequences, so we
can consider such cases as well once the main feature is committed.

--
With Regards,
Amit Kapila.

#358shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#354)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 4:56 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 7 Oct 2025 at 10:53, shveta malik <shveta.malik@gmail.com> wrote:

On Mon, Oct 6, 2025 at 4:33 PM vignesh C <vignesh21@gmail.com> wrote:

Here is the rebased remaining patches.

Thank you for the patches, please find a few comments on 001:

1)
Shall we have 'pg_publication_sequences' created in the first patch
itself to help verify which all sequences are added to ALL SEQ
publications? Currently it is in 4th patch.

Moved it to 0001 patch

2)
postgres=# create publication pub1 for all sequences WITH(publish='insert');
ERROR: publication parameters are not supported for publications
defined as FOR ALL SEQUENCES

postgres=# alter publication pub1 add table tab1;
ERROR: Tables or sequences cannot be added to or dropped from
publication defined FOR ALL TABLES, ALL SEQUENCES, or both

a) First msg has 'as', while second does not. Shall we make both the
same? I think we can get rid of 'as'.

The second error message has been changed based on another comment,
so both of them will have 'as' now.

b) Shouldn't the error msg start with lower case (second one)?

Updated

3)
+ * Process all_objects_list to set all_tables/all_sequences.

can we please replace 'all_tables/all_sequences' with 'all_tables
and/or all_sequences'

Updated

4)
+/*
+ * Publication types supported by FOR ALL ...
+ */
+typedef enum PublicationAllObjType

Should it be:
'Types of objects supported by FOR ALL publications'

Modified

5)
+-- Specifying both ALL TABLES and ALL SEQUENCES along with WITH
clause should throw a warning
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withcaluse
FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence
synchronization and will be ignored

Comment can be changed to say it will emit/raise a NOTICE (instead of warning).

Modified

6)
commit msg:
--
Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed in a subsequent patch.
--

This seems misleading, as we are not planning the "FOR SEQUENCE" in
the current set of patches, maybe we can rephrase it a bit.

Modified

Thanks for the comments, the v20251007 patch attached at [1] has the
changes for the same.

Thanks for making the changes. Just one trivial comment:

We can have 'applicable to' instead of 'applicable for' in all these places:

a)
+ /* Publication actions are not applicable for sequence-only publications */

b)
+ publishing tables. This clause is not applicable for sequences. The

c)
+ errmsg("publication parameters are not applicable for publications
defined as FOR ALL SEQUENCES"));

thanks
Shveta

#359Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#352)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, the attached patch has the changes for the same.

 parse_publication_options(ParseState *pstate,
    List *options,
+   bool allsequences,
+   bool alltables,
    bool *publish_given,
    PublicationActions *pubactions,
    bool *publish_via_partition_root_given,
    bool *publish_via_partition_root,
    bool *publish_generated_columns_given,
-   char *publish_generated_columns)
+   char *publish_generated_columns,
+   bool def_pub_action)
{
…
+
+ if (allsequences &&
+ (*publish_given || *publish_via_partition_root_given ||
+ *publish_generated_columns_given))
+ {
+ if (!alltables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication parameters are not applicable for publications
defined as FOR ALL SEQUENCES"));

I think we can let users specify publication parameters even for a
sequence-only publication, because users could then later add tables
to it via Alter Publication .. Add .. The notice should be
sufficient, and it would also be better to emit it outside this
function, as that could be extended in the future when we allow a mix
of sequence and table publications.

--
With Regards,
Amit Kapila.

#360Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#352)
Re: Logical Replication of sequences

On Tue, Oct 7, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 7 Oct 2025 at 12:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the patch is mostly LGTM, I have 2 suggestions, see if you
think this is useful.

1.
postgres[1390699]=# CREATE PUBLICATION pub FOR ALL SEQUENCES, ALL
TABLES WITH (publish = insert);
NOTICE: 55000: publication parameters are not applicable to sequence
synchronization and will be ignored
LOCATION: CreatePublication, publicationcmds.c:905

IMHO this notice seems confusing; from this it appears that (publish =
insert) is ignored completely, but actually it is not ignored for
tables. Should we explicitly say that it will be ignored only for
sequences? Something like below?

"publication parameters are not applicable to sequence synchronization
so it will be used only for tables and will be ignored for sequence
synchronization"
or
"publication parameters are not applicable to sequence synchronization
so it will be ignored for the sequence synchronization"

2.
+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+ {
+ if (pubform->puballtables && pubform->puballsequences)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
TABLES, ALL SEQUENCES publications."));
+ else if (pubform->puballtables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
publications."));
+ else
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
SEQUENCES publications."));
+ }

Can't we make this a single generic error message instead of
duplicating for each combination? Something like

errmsg("publication \"%s\" is defined as FOR ALL TABLES or ALL SEQUENCES",
NameStr(pubform->pubname)),
errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
or ALL SEQUENCES publications."));

--
Regards,
Dilip Kumar
Google

#361Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#360)
Re: Logical Replication of sequences

On Wed, Oct 8, 2025 at 2:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 7 Oct 2025 at 12:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the patch is mostly LGTM, I have 2 suggestions, see if you
think this is useful.

1.
postgres[1390699]=# CREATE PUBLICATION pub FOR ALL SEQUENCES, ALL
TABLES WITH (publish = insert);
NOTICE: 55000: publication parameters are not applicable to sequence
synchronization and will be ignored
LOCATION: CreatePublication, publicationcmds.c:905

IMHO this notice seems confusing; from this it appears that (publish =
insert) is ignored completely, but actually it is not ignored for
tables. Should we explicitly say that it will be ignored only for
sequences? Something like below?

"publication parameters are not applicable to sequence synchronization
so it will be used only for tables and will be ignored for sequence
synchronization"
or
"publication parameters are not applicable to sequence synchronization
so it will be ignored for the sequence synchronization"

How about a slightly shorter form like: 'publication parameters are
not applicable to sequence synchronization and will be ignored for
sequences'?

2.
+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+ {
+ if (pubform->puballtables && pubform->puballsequences)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
TABLES, ALL SEQUENCES publications."));
+ else if (pubform->puballtables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
publications."));
+ else
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
SEQUENCES publications."));
+ }

Can't we make this a single generic error message instead of
duplicating for each combination? Something like

errmsg("publication \"%s\" is defined as FOR ALL TABLES or ALL SEQUENCES",
NameStr(pubform->pubname)),
errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
or ALL SEQUENCES publications."));

Yeah, we can do that, but note that these messages are for an
existing publication and we are aware of its published contents, so
we can give clear messages to users. Why make it ambiguous?

--
With Regards,
Amit Kapila.

#362Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#361)
Re: Logical Replication of sequences

On Wed, 8 Oct 2025 at 3:15 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 8, 2025 at 2:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 7 Oct 2025 at 12:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the patch is mostly LGTM, I have 2 suggestions, see if you
think this is useful.

1.
postgres[1390699]=# CREATE PUBLICATION pub FOR ALL SEQUENCES, ALL
TABLES WITH (publish = insert);
NOTICE: 55000: publication parameters are not applicable to sequence
synchronization and will be ignored
LOCATION: CreatePublication, publicationcmds.c:905

IMHO this notice seems confusing; from this it appears that (publish =
insert) is ignored completely, but actually it is not ignored for
tables. Should we explicitly say that it will be ignored only for
sequences? Something like below?

"publication parameters are not applicable to sequence synchronization
so it will be used only for tables and will be ignored for sequence
synchronization"
or
"publication parameters are not applicable to sequence synchronization
so it will be ignored for the sequence synchronization"

How about a slightly shorter form like: 'publication parameters are
not applicable to sequence synchronization and will be ignored for
sequences'?

Works for me.

2.

+ if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+ {
+ if (pubform->puballtables && pubform->puballsequences)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL

SEQUENCES",

+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
TABLES, ALL SEQUENCES publications."));
+ else if (pubform->puballtables)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
publications."));
+ else
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+    NameStr(pubform->pubname)),
+ errdetail("Schemas cannot be added to or dropped from FOR ALL
SEQUENCES publications."));
+ }

Can't we make this a single generic error message instead of
duplicating for each combination? Something like

errmsg("publication \"%s\" is defined as FOR ALL TABLES or ALL

SEQUENCES",

NameStr(pubform->pubname)),
errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES
or ALL SEQUENCES publications."));

Yeah, we can do that, but note that these messages are for an
existing publication and we are aware of its published contents, so
we can give clear messages to users. Why make it ambiguous?

Hmm, yeah that makes sense.


Dilip

#363vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#362)
1 attachment(s)
Re: Logical Replication of sequences

On Wed, 8 Oct 2025 at 15:38, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Wed, 8 Oct 2025 at 3:15 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 8, 2025 at 2:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 4:52 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 7 Oct 2025 at 12:09, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the patch is mostly LGTM, I have 2 suggestions, see if you
think this is useful.

1.
postgres[1390699]=# CREATE PUBLICATION pub FOR ALL SEQUENCES, ALL
TABLES WITH (publish = insert);
NOTICE: 55000: publication parameters are not applicable to sequence
synchronization and will be ignored
LOCATION: CreatePublication, publicationcmds.c:905

IMHO this notice seems confusing, from this it appears that (publish =
insert) is ignored completely, but actually it is is not ignored for
table, should we explicitely say that it will be ignored only for
sequences. Something like below?

"publication parameters are not applicable to sequence synchronization
so it will be used only for tables and will be ignored for sequence
synchronization"
or
"publication parameters are not applicable to sequence synchronization
so it will be ignored for the sequence synchronization"

How about a slightly shorter form like: 'publication parameters are
not applicable to sequence synchronization and will be ignored for
sequences'?

Works for me.

Thanks for the comments, here is an updated version with a fix to handle this.

Regards,
Vignesh

Attachments:

v20251008-0001-Introduce-ALL-SEQUENCES-support-for-Postgr.patchapplication/octet-stream; name=v20251008-0001-Introduce-ALL-SEQUENCES-support-for-Postgr.patchDownload
From 4d4b414736b1a03993ff2f88e755df77732ece0e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 7 Oct 2025 11:57:48 +0530
Subject: [PATCH v20251008] Introduce "ALL SEQUENCES" support for PostgreSQL
 logical replication

This commit enhances logical replication by enabling the inclusion of all
sequences in publications.

Furthermore, psql has been enhanced: the \d command now displays which
publications contain the specified sequence, and the \dRp command shows
whether a specified publication includes all sequences.

Note: This patch currently supports only the "ALL SEQUENCES" clause.
Handling of clauses such as "FOR SEQUENCE" and "FOR SEQUENCES IN SCHEMA"
will be addressed later.

"ALL SEQUENCES" can be combined with "ALL TABLES" (e.g., 'FOR ALL SEQUENCES, ALL TABLES')
in a 'FOR ALL' publication. It cannot be combined with other options
such as TABLE or TABLES IN SCHEMA.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  10 +
 doc/src/sgml/logical-replication.sgml     |  42 +-
 doc/src/sgml/ref/alter_publication.sgml   |   6 +-
 doc/src/sgml/ref/create_publication.sgml  |  74 ++-
 doc/src/sgml/system-views.sgml            |  66 +++
 src/backend/catalog/pg_publication.c      |  75 ++-
 src/backend/catalog/system_views.sql      |  10 +
 src/backend/commands/publicationcmds.c    | 116 +++--
 src/backend/parser/gram.y                 |  87 +++-
 src/bin/pg_dump/pg_dump.c                 |  19 +-
 src/bin/pg_dump/pg_dump.h                 |   1 +
 src/bin/pg_dump/t/002_pg_dump.pl          |  21 +
 src/bin/psql/describe.c                   |  84 +++-
 src/bin/psql/tab-complete.in.c            |   6 +-
 src/include/catalog/pg_proc.dat           |   5 +
 src/include/catalog/pg_publication.h      |   9 +-
 src/include/nodes/parsenodes.h            |  18 +
 src/test/regress/expected/psql.out        |   6 +-
 src/test/regress/expected/publication.out | 570 +++++++++++++---------
 src/test/regress/expected/rules.out       |   8 +
 src/test/regress/sql/publication.sql      |  46 ++
 src/tools/pgindent/typedefs.list          |   2 +
 22 files changed, 928 insertions(+), 353 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 3acc2222a87..9b3aae8603b 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6374,6 +6374,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>puballsequences</structfield> <type>bool</type>
+      </para>
+      <para>
+       If true, this publication automatically includes all sequences
+       in the database, including any that will be created in the future.
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>pubinsert</structfield> <type>bool</type>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 9ccd5ec5006..b01f5e998b2 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -102,16 +102,18 @@
    A <firstterm>publication</firstterm> can be defined on any physical
    replication primary.  The node where a publication is defined is referred to
    as <firstterm>publisher</firstterm>.  A publication is a set of changes
-   generated from a table or a group of tables, and might also be described as
-   a change set or replication set.  Each publication exists in only one database.
+   generated from a table, a group of tables or the current state of all
+   sequences, and might also be described as a change set or replication set.
+   Each publication exists in only one database.
   </para>
 
   <para>
    Publications are different from schemas and do not affect how the table is
    accessed.  Each table can be added to multiple publications if needed.
-   Publications may currently only contain tables and all tables in schema.
-   Objects must be added explicitly, except when a publication is created for
-   <literal>ALL TABLES</literal>.
+   Publications may currently only contain tables or sequences. Objects must be
+   added explicitly, except when a publication is created using
+   <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
+   or <literal>FOR ALL SEQUENCES</literal>.
   </para>
 
   <para>
@@ -1049,24 +1051,24 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 <programlisting><![CDATA[
 /* pub # */ \dRp+
                                          Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))
 
-                                         Publication p2
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p2
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1"
     "public.t2" WHERE (e = 99)
 
-                                         Publication p3
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p3
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t2" WHERE (d = 10)
     "public.t3" WHERE (g = 10)
@@ -1491,10 +1493,10 @@ Publications:
      for each publication.
 <programlisting>
 /* pub # */ \dRp+
-                                         Publication p1
-  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
-----------+------------+---------+---------+---------+-----------+-------------------+----------
- postgres | f          | t       | t       | t       | t         | none              | f
+                                                Publication p1
+  Owner   | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root
+----------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ postgres | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.t1" (id, a, b, d)
 </programlisting></para>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index d5ea383e8bc..c36e754f887 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -82,8 +82,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
    new owning role, and that role must have <literal>CREATE</literal>
    privilege on the database.
    Also, the new owner of a
-   <link linkend="sql-createpublication-params-for-all-tables"><literal>FOR ALL TABLES</literal></link>
-   or <link linkend="sql-createpublication-params-for-tables-in-schema"><literal>FOR TABLES IN SCHEMA</literal></link>
+   <link linkend="sql-createpublication-params-for-tables-in-schema"><literal>FOR TABLES IN SCHEMA</literal></link>
+   or <link linkend="sql-createpublication-params-for-all-tables"><literal>FOR ALL TABLES</literal></link>
+   or <link linkend="sql-createpublication-params-for-all-sequences"><literal>FOR ALL SEQUENCES</literal></link>
    publication must be a superuser. However, a superuser can
    change the ownership of a publication regardless of these restrictions.
   </para>
@@ -153,6 +154,7 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
      <para>
       This clause alters publication parameters originally set by
       <xref linkend="sql-createpublication"/>.  See there for more information.
+      This clause is not applicable to sequences.
      </para>
      <caution>
       <para>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 802630f2df1..66a70e5c5b5 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,18 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
-    [ FOR ALL TABLES
-      | FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
+    [ FOR { <replaceable class="parameter">publication_object</replaceable> [, ... ] | <replaceable class="parameter">all_publication_object</replaceable> [, ... ] } ]
     [ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
 
     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
     TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+
+<phrase>where <replaceable class="parameter">all_publication_object</replaceable> is one of:</phrase>
+
+    ALL TABLES
+    ALL SEQUENCES
 </synopsis>
  </refsynopsisdiv>
 
@@ -120,16 +124,6 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
-   <varlistentry id="sql-createpublication-params-for-all-tables">
-    <term><literal>FOR ALL TABLES</literal></term>
-    <listitem>
-     <para>
-      Marks the publication as one that replicates changes for all tables in
-      the database, including tables created in the future.
-     </para>
-    </listitem>
-   </varlistentry>
-
    <varlistentry id="sql-createpublication-params-for-tables-in-schema">
     <term><literal>FOR TABLES IN SCHEMA</literal></term>
     <listitem>
@@ -161,11 +155,37 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-createpublication-params-for-all-tables">
+    <term><literal>FOR ALL TABLES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that replicates changes for all tables in
+      the database, including tables created in the future.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry id="sql-createpublication-params-for-all-sequences">
+    <term><literal>FOR ALL SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Marks the publication as one that synchronizes changes for all sequences
+      in the database, including sequences created in the future.
+     </para>
+
+     <para>
+      Only persistent sequences are included in the publication. Temporary
+      sequences and unlogged sequences are excluded from the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-createpublication-params-with">
     <term><literal>WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
     <listitem>
      <para>
-      This clause specifies optional parameters for a publication.  The
+      This clause specifies optional parameters for a publication when
+      publishing tables. This clause is not applicable to sequences. The
       following parameters are supported:
 
       <variablelist>
@@ -279,10 +299,10 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
   <title>Notes</title>
 
   <para>
-   If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
-   <literal>FOR TABLES IN SCHEMA</literal> are not specified, then the
-   publication starts out with an empty set of tables.  That is useful if
-   tables or schemas are to be added later.
+   If <literal>FOR TABLE</literal>, <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> or <literal>FOR ALL SEQUENCES</literal>
+   are not specified, then the publication starts out with an empty set of
+   tables.  That is useful if tables or schemas are to be added later.
   </para>
 
   <para>
@@ -298,8 +318,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
 
   <para>
    To add a table to a publication, the invoking user must have ownership
-   rights on the table.  The <command>FOR ALL TABLES</command> and
-   <command>FOR TABLES IN SCHEMA</command> clauses require the invoking
+   rights on the table.  The <literal>FOR TABLES IN SCHEMA</literal>,
+   <literal>FOR ALL TABLES</literal> and
+   <literal>FOR ALL SEQUENCES</literal> clauses require the invoking
    user to be a superuser.
   </para>
 
@@ -449,6 +470,21 @@ CREATE PUBLICATION sales_publication FOR TABLES IN SCHEMA marketing, sales;
 <programlisting>
 CREATE PUBLICATION users_filtered FOR TABLE users (user_id, firstname);
 </programlisting></para>
+
+  <para>
+   Create a publication that publishes all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_sequences FOR ALL SEQUENCES;
+</programlisting>
+  </para>
+
+  <para>
+   Create a publication that publishes all changes in all tables, and
+   all sequences for synchronization:
+<programlisting>
+CREATE PUBLICATION all_tables_sequences FOR ALL TABLES, ALL SEQUENCES;
+</programlisting>
+  </para>
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index 4187191ea74..7971498fe75 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -136,6 +136,11 @@
       <entry>prepared transactions</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+      <entry>publications and information of their associated sequences</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
       <entry>publications and information of their associated tables</entry>
@@ -2549,6 +2554,67 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 
  </sect1>
 
+ <sect1 id="view-pg-publication-sequences">
+  <title><structname>pg_publication_sequences</structname></title>
+
+  <indexterm zone="view-pg-publication-sequences">
+   <primary>pg_publication_sequences</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_publication_sequences</structname> provides
+   information about the mapping between publications and sequences.
+  </para>
+
+  <table>
+   <title><structname>pg_publication_sequences</structname> Columns</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pubname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+      </para>
+      <para>
+       Name of publication
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>schemaname</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+      </para>
+      <para>
+       Name of schema containing sequence
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequencename</structfield> <type>name</type>
+       (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+      </para>
+      <para>
+       Name of sequence
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+ </sect1>
+
  <sect1 id="view-pg-publication-tables">
   <title><structname>pg_publication_tables</structname></title>
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b911efcf9cb..ac2f4ee3561 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -115,8 +115,10 @@ check_publication_add_schema(Oid schemaid)
  * Returns if relation represented by oid and Form_pg_class entry
  * is publishable.
  *
- * Does same checks as check_publication_add_relation() above, but does not
- * need relation to be opened and also does not throw errors.
+ * Does same checks as check_publication_add_relation() above except for
+ * RELKIND_SEQUENCE, but does not need relation to be opened and also does
+ * not throw errors. Here, the additional check is to support ALL SEQUENCES
+ * publication.
  *
  * XXX  This also excludes all tables with relid < FirstNormalObjectId,
  * ie all tables created during initdb.  This mainly affects the preinstalled
@@ -134,7 +136,8 @@ static bool
 is_publishable_class(Oid relid, Form_pg_class reltuple)
 {
 	return (reltuple->relkind == RELKIND_RELATION ||
-			reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+			reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+			reltuple->relkind == RELKIND_SEQUENCE) &&
 		!IsCatalogRelationOid(relid) &&
 		reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
 		relid >= FirstNormalObjectId;
@@ -773,8 +776,8 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
+ * should use GetAllPublicationRelations().
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
@@ -854,14 +857,16 @@ GetAllTablesPublications(void)
 }
 
 /*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
+ * Gets list of all relations published by FOR ALL TABLES/SEQUENCES
+ * publication(s).
  *
  * If the publication publishes partition changes via their respective root
  * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * root partitioned tables. This is not applicable to FOR ALL SEQUENCES
+ * publication.
  */
 List *
-GetAllTablesPublicationRelations(bool pubviaroot)
+GetAllPublicationRelations(char relkind, bool pubviaroot)
 {
 	Relation	classRel;
 	ScanKeyData key[1];
@@ -869,12 +874,14 @@ GetAllTablesPublicationRelations(bool pubviaroot)
 	HeapTuple	tuple;
 	List	   *result = NIL;
 
+	Assert(!(relkind == RELKIND_SEQUENCE && pubviaroot));
+
 	classRel = table_open(RelationRelationId, AccessShareLock);
 
 	ScanKeyInit(&key[0],
 				Anum_pg_class_relkind,
 				BTEqualStrategyNumber, F_CHAREQ,
-				CharGetDatum(RELKIND_RELATION));
+				CharGetDatum(relkind));
 
 	scan = table_beginscan_catalog(classRel, 1, key);
 
@@ -1083,6 +1090,7 @@ GetPublication(Oid pubid)
 	pub->oid = pubid;
 	pub->name = pstrdup(NameStr(pubform->pubname));
 	pub->alltables = pubform->puballtables;
+	pub->allsequences = pubform->puballsequences;
 	pub->pubactions.pubinsert = pubform->pubinsert;
 	pub->pubactions.pubupdate = pubform->pubupdate;
 	pub->pubactions.pubdelete = pubform->pubdelete;
@@ -1160,7 +1168,8 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			 * those. Otherwise, get the partitioned table itself.
 			 */
 			if (pub_elem->alltables)
-				pub_elem_tables = GetAllTablesPublicationRelations(pub_elem->pubviaroot);
+				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
+															 pub_elem->pubviaroot);
 			else
 			{
 				List	   *relids,
@@ -1332,3 +1341,49 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 
 	SRF_RETURN_DONE(funcctx);
 }
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	List	   *sequences = NIL;
+
+	/* stuff done only on the first call of the function */
+	if (SRF_IS_FIRSTCALL())
+	{
+		char	   *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+		Publication *publication;
+		MemoryContext oldcontext;
+
+		/* create a function context for cross-call persistence */
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* switch to memory context appropriate for multiple function calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		publication = GetPublicationByName(pubname, false);
+
+		if (publication->allsequences)
+			sequences = GetAllPublicationRelations(RELKIND_SEQUENCE, false);
+
+		funcctx->user_fctx = (void *) sequences;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	/* stuff done on every call of the function */
+	funcctx = SRF_PERCALL_SETUP();
+	sequences = (List *) funcctx->user_fctx;
+
+	if (funcctx->call_cntr < list_length(sequences))
+	{
+		Oid			relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+		SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+	}
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c94f1f05f52..2c9638d6a3e 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -394,6 +394,16 @@ CREATE VIEW pg_publication_tables AS
          pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
     WHERE C.oid = GPT.relid;
 
+CREATE VIEW pg_publication_sequences AS
+    SELECT
+        P.pubname AS pubname,
+        N.nspname AS schemaname,
+        C.relname AS sequencename
+    FROM pg_publication P,
+         LATERAL pg_get_publication_sequences(P.pubname) GPS,
+         pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+    WHERE C.oid = GPS.relid;
+
 CREATE VIEW pg_locks AS
     SELECT * FROM pg_lock_status() AS L;
 
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index f4fc17acbe1..e85227da758 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -847,11 +847,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 		aclcheck_error(aclresult, OBJECT_DATABASE,
 					   get_database_name(MyDatabaseId));
 
-	/* FOR ALL TABLES requires superuser */
-	if (stmt->for_all_tables && !superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 errmsg("must be superuser to create FOR ALL TABLES publication")));
+	/* FOR ALL TABLES and FOR ALL SEQUENCES requires superuser */
+	if (!superuser())
+	{
+		if (stmt->for_all_tables || stmt->for_all_sequences)
+			ereport(ERROR,
+					errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+					errmsg("must be superuser to create a FOR ALL TABLES or ALL SEQUENCES publication"));
+	}
 
 	rel = table_open(PublicationRelationId, RowExclusiveLock);
 
@@ -880,11 +883,20 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 							  &publish_generated_columns_given,
 							  &publish_generated_columns);
 
+	if (stmt->for_all_sequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored for sequences"));
+
 	puboid = GetNewOidWithIndex(rel, PublicationObjectIndexId,
 								Anum_pg_publication_oid);
 	values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
 	values[Anum_pg_publication_puballtables - 1] =
 		BoolGetDatum(stmt->for_all_tables);
+	values[Anum_pg_publication_puballsequences - 1] =
+		BoolGetDatum(stmt->for_all_sequences);
 	values[Anum_pg_publication_pubinsert - 1] =
 		BoolGetDatum(pubactions.pubinsert);
 	values[Anum_pg_publication_pubupdate - 1] =
@@ -914,10 +926,14 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
 	/* Associate objects with the publication. */
 	if (stmt->for_all_tables)
 	{
-		/* Invalidate relcache so that publication info is rebuilt. */
+		/*
+		 * Invalidate relcache so that publication info is rebuilt. Sequence
+		 * publications don't require invalidation as incremental sync isn't
+		 * supported for sequences and replica identity checks don't apply.
+		 */
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!stmt->for_all_sequences)
 	{
 		ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
 								   &schemaidlist);
@@ -989,6 +1005,8 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	List	   *root_relids = NIL;
 	ListCell   *lc;
 
+	pubform = (Form_pg_publication) GETSTRUCT(tup);
+
 	parse_publication_options(pstate,
 							  stmt->options,
 							  &publish_given, &pubactions,
@@ -997,7 +1015,12 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 							  &publish_generated_columns_given,
 							  &publish_generated_columns);
 
-	pubform = (Form_pg_publication) GETSTRUCT(tup);
+	if (pubform->puballsequences &&
+		(publish_given || publish_via_partition_root_given ||
+		 publish_generated_columns_given))
+		ereport(NOTICE,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("publication parameters are not applicable to sequence synchronization and will be ignored for sequences"));
 
 	/*
 	 * If the publication doesn't publish changes via the root partitioned
@@ -1451,20 +1474,50 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
 	 * Check that user is allowed to manipulate the publication tables in
 	 * schema
 	 */
-	if (schemaidlist && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (schemaidlist && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+						   NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Schemas cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 
 	/* Check that user is allowed to manipulate the publication tables. */
-	if (tables && pubform->puballtables)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("publication \"%s\" is defined as FOR ALL TABLES",
-						NameStr(pubform->pubname)),
-				 errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+	if (tables && (pubform->puballtables || pubform->puballsequences))
+	{
+		if (pubform->puballtables && pubform->puballsequences)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES, ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES, ALL SEQUENCES publications."));
+		else if (pubform->puballtables)
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL TABLES",
+						   NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications."));
+		else
+			ereport(ERROR,
+					errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+						   NameStr(pubform->pubname)),
+					errdetail("Tables or sequences cannot be added to or dropped from FOR ALL SEQUENCES publications."));
+	}
 }
 
 /*
@@ -2014,19 +2067,16 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			aclcheck_error(aclresult, OBJECT_DATABASE,
 						   get_database_name(MyDatabaseId));
 
-		if (form->puballtables && !superuser_arg(newOwnerId))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
-
-		if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg("permission denied to change owner of publication \"%s\"",
-							NameStr(form->pubname)),
-					 errhint("The owner of a FOR TABLES IN SCHEMA publication must be a superuser.")));
+		if (!superuser_arg(newOwnerId))
+		{
+			if (form->puballtables || form->puballsequences ||
+				is_schema_publication(form->oid))
+				ereport(ERROR,
+						errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+						errmsg("permission denied to change owner of publication \"%s\"",
+							   NameStr(form->pubname)),
+						errhint("The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser."));
+		}
 	}
 
 	form->pubowner = newOwnerId;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 57bf7a7c7f2..21caf2d43bf 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -202,6 +202,10 @@ static void processCASbits(int cas_bits, int location, const char *constrType,
 			   bool *not_valid, bool *no_inherit, core_yyscan_t yyscanner);
 static PartitionStrategy parsePartitionStrategy(char *strategy, int location,
 												core_yyscan_t yyscanner);
+static void preprocess_pub_all_objtype_list(List *all_objects_list,
+											bool *all_tables,
+											bool *all_sequences,
+											core_yyscan_t yyscanner);
 static void preprocess_pubobj_list(List *pubobjspec_list,
 								   core_yyscan_t yyscanner);
 static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
@@ -260,6 +264,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionBoundSpec *partboundspec;
 	RoleSpec   *rolespec;
 	PublicationObjSpec *publicationobjectspec;
+	PublicationAllObjSpec *publicationallobjectspec;
 	struct SelectLimit *selectlimit;
 	SetQuantifier setquantifier;
 	struct GroupClause *groupclause;
@@ -447,7 +452,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list
+				drop_option_list pub_obj_list pub_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -585,6 +590,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	var_value zone_value
 %type <rolespec> auth_ident RoleSpec opt_granted_by
 %type <publicationobjectspec> PublicationObjSpec
+%type <publicationallobjectspec> PublicationAllObjSpec
 
 %type <keyword> unreserved_keyword type_func_name_keyword
 %type <keyword> col_name_keyword reserved_keyword
@@ -10704,7 +10710,12 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL TABLES [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *
+ *		TABLES
+ *		SEQUENCES
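+ *
+ * For example: CREATE PUBLICATION name FOR ALL TABLES, ALL SEQUENCES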
  *
  * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
  *
@@ -10724,13 +10735,16 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR ALL TABLES opt_definition
+			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->options = $7;
-					n->for_all_tables = true;
+					n->pubobjects = (List *) $5;
+					preprocess_pub_all_objtype_list($5, &n->for_all_tables,
+													&n->for_all_sequences,
+													yyscanner);
+					n->options = $6;
 					$$ = (Node *) n;
 				}
 			| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -10842,6 +10856,28 @@ pub_obj_list:	PublicationObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
+PublicationAllObjSpec:
+				ALL TABLES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_TABLES;
+						$$->location = @1;
+					}
+				| ALL SEQUENCES
+					{
+						$$ = makeNode(PublicationAllObjSpec);
+						$$->pubobjtype = PUBLICATION_ALL_SEQUENCES;
+						$$->location = @1;
+					}
+					;
+
+pub_obj_type_list:	PublicationAllObjSpec
+					{ $$ = list_make1($1); }
+				| pub_obj_type_list ',' PublicationAllObjSpec
+					{ $$ = lappend($1, $3); }
+	;
+
+
 /*****************************************************************************
  *
  * ALTER PUBLICATION name SET ( options )
@@ -19639,6 +19675,47 @@ parsePartitionStrategy(char *strategy, int location, core_yyscan_t yyscanner)
 
 }
 
+/*
+ * Process all_objects_list to set all_tables and/or all_sequences.
+ * Also checks that the same object type is not specified more than once.
+ */
+static void
+preprocess_pub_all_objtype_list(List *all_objects_list, bool *all_tables,
+								bool *all_sequences, core_yyscan_t yyscanner)
+{
+	if (!all_objects_list)
+		return;
+
+	*all_tables = false;
+	*all_sequences = false;
+
+	foreach_ptr(PublicationAllObjSpec, obj, all_objects_list)
+	{
+		if (obj->pubobjtype == PUBLICATION_ALL_TABLES)
+		{
+			if (*all_tables)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL TABLES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_tables = true;
+		}
+		else if (obj->pubobjtype == PUBLICATION_ALL_SEQUENCES)
+		{
+			if (*all_sequences)
+				ereport(ERROR,
+						errcode(ERRCODE_SYNTAX_ERROR),
+						errmsg("invalid publication object list"),
+						errdetail("ALL SEQUENCES can be specified only once."),
+						parser_errposition(obj->location));
+
+			*all_sequences = true;
+		}
+	}
+}
+
 /*
  * Process pubobjspec_list to check for errors in any of the objects and
  * convert PUBLICATIONOBJ_CONTINUATION into appropriate PublicationObjSpecType.
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9fc3671cb35..641bece12c7 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4531,6 +4531,7 @@ getPublications(Archive *fout)
 	int			i_pubname;
 	int			i_pubowner;
 	int			i_puballtables;
+	int			i_puballsequences;
 	int			i_pubinsert;
 	int			i_pubupdate;
 	int			i_pubdelete;
@@ -4561,9 +4562,14 @@ getPublications(Archive *fout)
 		appendPQExpBufferStr(query, "false AS pubviaroot, ");
 
 	if (fout->remoteVersion >= 180000)
-		appendPQExpBufferStr(query, "p.pubgencols ");
+		appendPQExpBufferStr(query, "p.pubgencols, ");
 	else
-		appendPQExpBuffer(query, "'%c' AS pubgencols ", PUBLISH_GENCOLS_NONE);
+		appendPQExpBuffer(query, "'%c' AS pubgencols, ", PUBLISH_GENCOLS_NONE);
+
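+	/* Servers without puballsequences cannot publish sequences. */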
+	if (fout->remoteVersion >= 190000)
+		appendPQExpBufferStr(query, "p.puballsequences ");
+	else
+		appendPQExpBufferStr(query, "false AS puballsequences ");
 
 	appendPQExpBufferStr(query, "FROM pg_publication p");
 
@@ -4579,6 +4585,7 @@ getPublications(Archive *fout)
 	i_pubname = PQfnumber(res, "pubname");
 	i_pubowner = PQfnumber(res, "pubowner");
 	i_puballtables = PQfnumber(res, "puballtables");
+	i_puballsequences = PQfnumber(res, "puballsequences");
 	i_pubinsert = PQfnumber(res, "pubinsert");
 	i_pubupdate = PQfnumber(res, "pubupdate");
 	i_pubdelete = PQfnumber(res, "pubdelete");
@@ -4599,6 +4606,8 @@ getPublications(Archive *fout)
 		pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
 		pubinfo[i].puballtables =
 			(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+		pubinfo[i].puballsequences =
+			(strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
 		pubinfo[i].pubinsert =
 			(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
 		pubinfo[i].pubupdate =
@@ -4650,8 +4659,12 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 	appendPQExpBuffer(query, "CREATE PUBLICATION %s",
 					  qpubname);
 
-	if (pubinfo->puballtables)
+	if (pubinfo->puballtables && pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL TABLES, ALL SEQUENCES");
+	else if (pubinfo->puballtables)
 		appendPQExpBufferStr(query, " FOR ALL TABLES");
+	else if (pubinfo->puballsequences)
+		appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
 
 	appendPQExpBufferStr(query, " WITH (publish = '");
 	if (pubinfo->pubinsert)
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index bcc94ff07cc..fa6d1a510f7 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -669,6 +669,7 @@ typedef struct _PublicationInfo
 	DumpableObject dobj;
 	const char *rolname;
 	bool		puballtables;
+	bool		puballsequences;
 	bool		pubinsert;
 	bool		pubupdate;
 	bool		pubdelete;
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fc5b9b52f80..dee45e4eaf6 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -3432,6 +3432,27 @@ my %tests = (
 		like => { %full_runs, section_post_data => 1, },
 	},
 
+	'CREATE PUBLICATION pub6' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub6
+						 FOR ALL SEQUENCES;',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub6 FOR ALL SEQUENCES WITH (publish = 'insert, update, delete, truncate');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
+	'CREATE PUBLICATION pub7' => {
+		create_order => 50,
+		create_sql => 'CREATE PUBLICATION pub7
+						 FOR ALL SEQUENCES, ALL TABLES
+						 WITH (publish = \'\');',
+		regexp => qr/^
+			\QCREATE PUBLICATION pub7 FOR ALL TABLES, ALL SEQUENCES WITH (publish = '');\E
+			/xm,
+		like => { %full_runs, section_post_data => 1, },
+	},
+
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
 		create_sql => 'CREATE SUBSCRIPTION sub1
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 4aa793d7de7..36f24502842 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1759,7 +1759,7 @@ describeOneTableDetails(const char *schemaname,
 	{
 		PGresult   *result = NULL;
 		printQueryOpt myopt = pset.popt;
-		char	   *footers[2] = {NULL, NULL};
+		char	   *footers[3] = {NULL, NULL, NULL};
 
 		if (pset.sversion >= 100000)
 		{
@@ -1855,6 +1855,39 @@ describeOneTableDetails(const char *schemaname,
 		}
 		PQclear(result);
 
+		/* Print any publications */
+		if (pset.sversion >= 190000)
+		{
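+			/*
+			 * Sequences are published only via FOR ALL SEQUENCES
+			 * publications, so checking puballsequences is sufficient.
+			 */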
+			printfPQExpBuffer(&buf, "SELECT pubname FROM pg_catalog.pg_publication p"
+							  "\nWHERE p.puballsequences"
+							  "\n AND pg_catalog.pg_relation_is_publishable('%s')"
+							  "\nORDER BY 1",
+							  oid);
+
+			result = PSQLexec(buf.data);
+			if (result)
+			{
+				int			nrows = PQntuples(result);
+
+				if (nrows > 0)
+				{
+					printfPQExpBuffer(&tmpbuf, _("Publications:"));
+					for (i = 0; i < nrows; i++)
+						appendPQExpBuffer(&tmpbuf, "\n    \"%s\"", PQgetvalue(result, i, 0));
+
+					/* Store in the first available footer slot */
+					if (footers[0] == NULL)
+						footers[0] = pg_strdup(tmpbuf.data);
+					else
+						footers[1] = pg_strdup(tmpbuf.data);
+
+					resetPQExpBuffer(&tmpbuf);
+				}
+
+				PQclear(result);
+			}
+		}
+
 		if (tableinfo.relpersistence == RELPERSISTENCE_UNLOGGED)
 			printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
 							  schemaname, relationname);
@@ -1870,6 +1903,7 @@ describeOneTableDetails(const char *schemaname,
 		printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
 
 		free(footers[0]);
+		free(footers[1]);
 
 		retval = true;
 		goto error_return;		/* not an error, just return early */
@@ -6398,7 +6432,7 @@ listPublications(const char *pattern)
 	PQExpBufferData buf;
 	PGresult   *res;
 	printQueryOpt myopt = pset.popt;
-	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false};
+	static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
 
 	if (pset.sversion < 100000)
 	{
@@ -6415,13 +6449,20 @@ listPublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT pubname AS \"%s\",\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
-					  "  puballtables AS \"%s\",\n"
-					  "  pubinsert AS \"%s\",\n"
-					  "  pubupdate AS \"%s\",\n"
-					  "  pubdelete AS \"%s\"",
+					  "  puballtables AS \"%s\"",
 					  gettext_noop("Name"),
 					  gettext_noop("Owner"),
-					  gettext_noop("All tables"),
+					  gettext_noop("All tables"));
+
+	if (pset.sversion >= 190000)
+		appendPQExpBuffer(&buf,
+						  ",\n  puballsequences AS \"%s\"",
+						  gettext_noop("All sequences"));
+
+	appendPQExpBuffer(&buf,
+					  ",\n  pubinsert AS \"%s\",\n"
+					  "  pubupdate AS \"%s\",\n"
+					  "  pubdelete AS \"%s\"",
 					  gettext_noop("Inserts"),
 					  gettext_noop("Updates"),
 					  gettext_noop("Deletes"));
@@ -6532,6 +6573,7 @@ describePublications(const char *pattern)
 	bool		has_pubtruncate;
 	bool		has_pubgencols;
 	bool		has_pubviaroot;
+	bool		has_pubsequence;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6546,6 +6588,7 @@ describePublications(const char *pattern)
 		return true;
 	}
 
+	has_pubsequence = (pset.sversion >= 190000);
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubgencols = (pset.sversion >= 180000);
 	has_pubviaroot = (pset.sversion >= 130000);
@@ -6555,7 +6598,18 @@ describePublications(const char *pattern)
 	printfPQExpBuffer(&buf,
 					  "SELECT oid, pubname,\n"
 					  "  pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
-					  "  puballtables, pubinsert, pubupdate, pubdelete");
+					  "  puballtables");
+
+	if (has_pubsequence)
+		appendPQExpBufferStr(&buf,
+							 ", puballsequences");
+	else
+		appendPQExpBufferStr(&buf,
+							 ", false AS puballsequences");
+
+	appendPQExpBufferStr(&buf,
+						 ", pubinsert, pubupdate, pubdelete");
+
 	if (has_pubtruncate)
 		appendPQExpBufferStr(&buf,
 							 ", pubtruncate");
@@ -6630,6 +6684,8 @@ describePublications(const char *pattern)
 		bool		puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
 		printTableOpt myopt = pset.popt.topt;
 
+		if (has_pubsequence)
+			ncols++;
 		if (has_pubtruncate)
 			ncols++;
 		if (has_pubgencols)
@@ -6643,6 +6699,8 @@ describePublications(const char *pattern)
 
 		printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
 		printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+		if (has_pubsequence)
+			printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
 		printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
@@ -6655,15 +6713,17 @@ describePublications(const char *pattern)
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
-		printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+		if (has_pubsequence)
+			printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+		printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubtruncate)
-			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
-		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
-		if (has_pubviaroot)
+		if (has_pubgencols)
 			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
+		if (has_pubviaroot)
+			printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 6176741d20b..64bfd309c9a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -3585,11 +3585,11 @@ match_previous_words(int pattern_id,
 
 /* CREATE PUBLICATION */
 	else if (Matches("CREATE", "PUBLICATION", MatchAny))
-		COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR TABLES IN SCHEMA", "WITH (");
+		COMPLETE_WITH("FOR TABLE", "FOR TABLES IN SCHEMA", "FOR ALL TABLES", "FOR ALL SEQUENCES", "WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
-		COMPLETE_WITH("TABLE", "ALL TABLES", "TABLES IN SCHEMA");
+		COMPLETE_WITH("TABLE", "TABLES IN SCHEMA", "ALL TABLES", "ALL SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
-		COMPLETE_WITH("TABLES");
+		COMPLETE_WITH("TABLES", "SEQUENCES");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
 		COMPLETE_WITH("WITH (");
 	else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLES"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6bb31892d1d..6530f3a330c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -12302,6 +12302,11 @@
   proargmodes => '{v,o,o,o,o}',
   proargnames => '{pubname,pubid,relid,attrs,qual}',
   prosrc => 'pg_get_publication_tables' },
+{ oid => '8052', descr => 'get OIDs of sequences in a publication',
+  proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+  provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+  proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+  proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
 { oid => '6121',
   descr => 'returns whether a relation can be part of a publication',
   proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 6e074190fd2..22f48bb8975 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 	 */
 	bool		puballtables;
 
+	/*
+	 * indicates that this is a special publication which should encompass
+	 * all sequences in the database (except for the unlogged and temp ones)
+	 */
+	bool		puballsequences;
+
 	/* true if inserts are published */
 	bool		pubinsert;
 
@@ -129,6 +135,7 @@ typedef struct Publication
 	Oid			oid;
 	char	   *name;
 	bool		alltables;
+	bool		allsequences;
 	bool		pubviaroot;
 	PublishGencolsType pubgencols_type;
 	PublicationActions pubactions;
@@ -163,7 +170,7 @@ typedef enum PublicationPartOpt
 
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
-extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllPublicationRelations(char relkind, bool pubviaroot);
 extern List *GetPublicationSchemas(Oid pubid);
 extern List *GetSchemaPublications(Oid schemaid);
 extern List *GetSchemaPublicationRelations(Oid schemaid,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 87c1086ec99..dc09d1a3f03 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4294,6 +4294,22 @@ typedef struct PublicationObjSpec
 	ParseLoc	location;		/* token location, or -1 if unknown */
 } PublicationObjSpec;
 
+/*
+ * Types of objects supported by FOR ALL publications
+ */
+typedef enum PublicationAllObjType
+{
+	PUBLICATION_ALL_TABLES,
+	PUBLICATION_ALL_SEQUENCES,
+} PublicationAllObjType;
+
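+/* One element of the object type list in CREATE PUBLICATION ... FOR ALL ... */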
+typedef struct PublicationAllObjSpec
+{
+	NodeTag		type;
+	PublicationAllObjType pubobjtype;	/* type of this publication object */
+	ParseLoc	location;		/* token location, or -1 if unknown */
+} PublicationAllObjSpec;
+
 typedef struct CreatePublicationStmt
 {
 	NodeTag		type;
@@ -4301,6 +4317,8 @@ typedef struct CreatePublicationStmt
 	List	   *options;		/* List of DefElem nodes */
 	List	   *pubobjects;		/* Optional list of publication objects */
 	bool		for_all_tables; /* Special publication for all tables in db */
+	bool		for_all_sequences;	/* Special publication for all sequences
+									 * in db */
 } CreatePublicationStmt;
 
 typedef enum AlterPublicationAction
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index a79325e8a2f..fa8984ffe0d 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -6445,9 +6445,9 @@ List of schemas
 (0 rows)
 
 \dRp "no.such.publication"
-                                        List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
-------+-------+------------+---------+---------+---------+-----------+-------------------+----------
+                                                List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+------+-------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
 (0 rows)
 
 \dRs "no.such.subscription"
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 895ca87a0df..e72d1308967 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -40,20 +40,20 @@ CREATE PUBLICATION testpub_xxx WITH (publish_generated_columns);
 ERROR:  invalid value for publication parameter "publish_generated_columns": ""
 DETAIL:  Valid values are "none" and "stored".
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | f       | t       | f       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | f       | t       | f       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
 \dRp
-                                                        List of publications
-        Name        |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default    | regress_publication_user | f          | t       | t       | t       | f         | none              | f
- testpub_ins_trunct | regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                                List of publications
+        Name        |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default    | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
+ testpub_ins_trunct | regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 (2 rows)
 
 --- adding tables
@@ -70,15 +70,15 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't drop from all tables publication
 ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add to for all tables publication
 ALTER PUBLICATION testpub_foralltables SET TABLE pub_test.testpub_nopk;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
-DETAIL:  Tables cannot be added to or dropped from FOR ALL TABLES publications.
+DETAIL:  Tables or sequences cannot be added to or dropped from FOR ALL TABLES publications.
 -- fail - can't add schema to 'FOR ALL TABLES' publication
 ALTER PUBLICATION testpub_foralltables ADD TABLES IN SCHEMA pub_test;
 ERROR:  publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -97,10 +97,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 Tables from schemas:
@@ -109,20 +109,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -133,10 +133,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
-                                       Publication testpub_for_tbl_schema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                               Publication testpub_for_tbl_schema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -154,10 +154,10 @@ LINE 1: ...CATION testpub_parsertst FOR TABLES IN SCHEMA foo, test.foo;
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 Tables from schemas:
@@ -166,10 +166,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test"
 
@@ -180,10 +180,10 @@ ERROR:  relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
-                                         Publication testpub_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
 
@@ -207,10 +207,10 @@ Not-null constraints:
     "testpub_tbl2_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_foralltables
-                                        Publication testpub_foralltables
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | f       | f         | none              | f
+                                                Publication testpub_foralltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | f       | f         | none              | f
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -222,24 +222,110 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
-                                              Publication testpub3
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub3
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
     "public.testpub_tbl3a"
 
 \dRp+ testpub4
-                                              Publication testpub4
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub4
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl3"
 
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+           pubname            | puballtables | puballsequences 
+------------------------------+--------------+-----------------
+ regress_pub_forallsequences1 | f            | t
+(1 row)
+
+\d+ regress_pub_seq0
+                      Sequence "public.regress_pub_seq0"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+
+\dRp+ regress_pub_forallsequences1
+                                            Publication regress_pub_forallsequences1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+                     Sequence "pub_test.regress_pub_seq1"
+  Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache 
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
+Publications:
+    "regress_pub_forallsequences1"
+    "regress_pub_forallsequences2"
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+-- Specifying WITH clause in an ALL SEQUENCES publication will emit a NOTICE.
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored for sequences
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+NOTICE:  publication parameters are not applicable to sequence synchronization and will be ignored for sequences
+WARNING:  "wal_level" is insufficient to publish logical changes
+HINT:  Set "wal_level" to "logical" before creating subscriptions.
+RESET client_min_messages;
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+                pubname                 | puballtables | puballsequences 
+----------------------------------------+--------------+-----------------
+ regress_pub_for_allsequences_alltables | t            | t
+(1 row)
+
+\dRp+ regress_pub_for_allsequences_alltables
+                                       Publication regress_pub_for_allsequences_alltables
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | t             | t       | t       | t       | t         | none              | f
+(1 row)
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1;
+DROP PUBLICATION regress_pub_forallsequences2;
+DROP PUBLICATION regress_pub_for_allsequences_alltables;
+DROP PUBLICATION regress_pub_for_allsequences_alltables_withclause;
+DROP PUBLICATION regress_pub_for_allsequences_withclause;
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES...
+                                                             ^
+DETAIL:  ALL TABLES can be specified only once.
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+ERROR:  invalid publication object list
+LINE 1: ...equences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUEN...
+                                                             ^
+DETAIL:  ALL SEQUENCES can be specified only once.
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
@@ -255,10 +341,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_parted"
 
@@ -273,10 +359,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
-                                         Publication testpub_forparted
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | t
+                                                 Publication testpub_forparted
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | t
 Tables:
     "public.testpub_parted"
 
@@ -305,10 +391,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -321,10 +407,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -340,10 +426,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -351,10 +437,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
-                                              Publication testpub5
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                      Publication testpub5
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -387,10 +473,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
-                                          Publication testpub_syntax1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -400,10 +486,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
-                                          Publication testpub_syntax2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | f         | none              | f
+                                                  Publication testpub_syntax2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | f         | none              | f
 Tables:
     "public.testpub_rf_tbl1"
     "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -518,10 +604,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
-                                              Publication testpub6
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                      Publication testpub6
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -813,10 +899,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
-                                         Publication testpub_table_ins
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | f       | f       | t         | none              | f
+                                                 Publication testpub_table_ins
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | f       | f       | t         | none              | f
 Tables:
     "public.testpub_tbl5" (a)
 
@@ -1006,10 +1092,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
-                                        Publication testpub_both_filters
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                Publication testpub_both_filters
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1217,10 +1303,10 @@ ERROR:  relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR:  publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
-                                           Publication testpub_fortbl
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                   Publication testpub_fortbl
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1260,10 +1346,10 @@ Not-null constraints:
     "testpub_tbl1_id_not_null" NOT NULL "id"
 
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 Tables:
     "pub_test.testpub_nopk"
     "public.testpub_tbl1"
@@ -1334,7 +1420,7 @@ SET ROLE regress_publication_user3;
 -- fail - new owner must be superuser
 ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR:  permission denied to change owner of publication "testpub4"
-HINT:  The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
+HINT:  The owner of a FOR ALL TABLES or ALL SEQUENCES or TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
@@ -1343,10 +1429,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
-                                          Publication testpub_default
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                  Publication testpub_default
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- fail - must be owner of publication
@@ -1356,20 +1442,20 @@ ERROR:  must be owner of publication testpub_default
 RESET ROLE;
 ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
 \dRp testpub_foo
-                                                     List of publications
-    Name     |          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
--------------+--------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_foo | regress_publication_user | f          | t       | t       | t       | f         | none              | f
+                                                             List of publications
+    Name     |          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_foo | regress_publication_user | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- rename back to keep the rest simple
 ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
 ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
 \dRp testpub_default
-                                                       List of publications
-      Name       |           Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
------------------+---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- testpub_default | regress_publication_user2 | f          | t       | t       | t       | f         | none              | f
+                                                               List of publications
+      Name       |           Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ testpub_default | regress_publication_user2 | f          | f             | t       | t       | t       | f         | none              | f
 (1 row)
 
 -- adding schemas and tables
@@ -1385,19 +1471,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1411,44 +1497,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "public"
 
 \dRp+ testpub4_forschema
-                                         Publication testpub4_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub4_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
-                                         Publication testpub5_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub5_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub6_forschema
-                                         Publication testpub6_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub6_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "CURRENT_SCHEMA"
     "public"
 
 \dRp+ testpub_fortable
-                                          Publication testpub_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                  Publication testpub_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1482,10 +1568,10 @@ ERROR:  schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1493,20 +1579,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1_renamed"
     "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
-                                         Publication testpub2_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub2_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1514,10 +1600,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1526,10 +1612,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1538,10 +1624,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR:  schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1549,10 +1635,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1560,10 +1646,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR:  tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1571,29 +1657,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
 -- drop all schemas
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 -- alter publication set multiple schema
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1602,10 +1688,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;
 ERROR:  schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
     "pub_test2"
@@ -1614,10 +1700,10 @@ Tables from schemas:
 -- removing the duplicate schemas
 ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;
 \dRp+ testpub1_forschema
-                                         Publication testpub1_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub1_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1696,18 +1782,18 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub3_forschema;
 RESET client_min_messages;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;
 \dRp+ testpub3_forschema
-                                         Publication testpub3_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                 Publication testpub3_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables from schemas:
     "pub_test1"
 
@@ -1717,20 +1803,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA
 CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;
 RESET client_min_messages;
 \dRp+ testpub_forschema_fortable
-                                     Publication testpub_forschema_fortable
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_forschema_fortable
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
     "pub_test1"
 
 \dRp+ testpub_fortable_forschema
-                                     Publication testpub_fortable_forschema
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                             Publication testpub_fortable_forschema
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "pub_test2.tbl1"
 Tables from schemas:
@@ -1851,18 +1937,18 @@ DROP SCHEMA sch2 cascade;
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = stored);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | stored            | f
 (1 row)
 
 CREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | t          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | t          | f             | t       | t       | t       | t         | none              | f
 (1 row)
 
 DROP PUBLICATION pub1;
@@ -1873,50 +1959,50 @@ CREATE TABLE gencols (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);
 -- Generated columns in column list, when 'publish_generated_columns'='none'
 CREATE PUBLICATION pub1 FOR table gencols(a, gen1) WITH (publish_generated_columns = none);
 \dRp+ pub1
-                                                Publication pub1
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub1
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, when 'publish_generated_columns'='stored'
 CREATE PUBLICATION pub2 FOR table gencols(a, gen1) WITH (publish_generated_columns = stored);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | stored            | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | stored            | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Generated columns in column list, then set 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET (publish_generated_columns = none);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
 -- Remove generated columns from column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a)
 
 -- Add generated columns in column list, when 'publish_generated_columns'='none'
 ALTER PUBLICATION pub2 SET TABLE gencols(a, gen1);
 \dRp+ pub2
-                                                Publication pub2
-          Owner           | All tables | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
---------------------------+------------+---------+---------+---------+-----------+-------------------+----------
- regress_publication_user | f          | t       | t       | t       | t         | none              | f
+                                                        Publication pub2
+          Owner           | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Generated columns | Via root 
+--------------------------+------------+---------------+---------+---------+---------+-----------+-------------------+----------
+ regress_publication_user | f          | f             | t       | t       | t       | t         | none              | f
 Tables:
     "public.gencols" (a, gen1)
 
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 8859a5a885f..3d09793c97c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1462,6 +1462,14 @@ pg_prepared_xacts| SELECT p.transaction,
    FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
      LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
      LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+    n.nspname AS schemaname,
+    c.relname AS sequencename
+   FROM pg_publication p,
+    LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+    (pg_class c
+     JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+  WHERE (c.oid = gps.relid);
 pg_publication_tables| SELECT p.pubname,
     n.nspname AS schemaname,
     c.relname AS tablename,
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f423061395..00390aecd47 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -120,6 +120,52 @@ RESET client_min_messages;
 DROP TABLE testpub_tbl3, testpub_tbl3a;
 DROP PUBLICATION testpub3, testpub4;
 
+--- Tests for publications with SEQUENCES
+CREATE SEQUENCE regress_pub_seq0;
+CREATE SEQUENCE pub_test.regress_pub_seq1;
+
+-- FOR ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences1 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_forallsequences1';
+\d+ regress_pub_seq0
+\dRp+ regress_pub_forallsequences1
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_forallsequences2 FOR ALL SEQUENCES;
+RESET client_min_messages;
+
+-- check that describe sequence lists both publications the sequence belongs to
+\d+ pub_test.regress_pub_seq1
+
+--- Specifying both ALL TABLES and ALL SEQUENCES
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES;
+
+-- Specifying WITH clause in an ALL SEQUENCES publication will emit a NOTICE.
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL SEQUENCES WITH (publish_generated_columns = 'stored');
+RESET client_min_messages;
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'regress_pub_for_allsequences_alltables';
+\dRp+ regress_pub_for_allsequences_alltables
+
+DROP SEQUENCE regress_pub_seq0, pub_test.regress_pub_seq1;
+DROP PUBLICATION regress_pub_forallsequences1;
+DROP PUBLICATION regress_pub_forallsequences2;
+DROP PUBLICATION regress_pub_for_allsequences_alltables;
+DROP PUBLICATION regress_pub_for_allsequences_alltables_withclause;
+DROP PUBLICATION regress_pub_for_allsequences_withclause;
+
+-- fail - Specifying ALL TABLES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL TABLES;
+
+-- fail - Specifying ALL SEQUENCES more than once
+CREATE PUBLICATION regress_pub_for_allsequences_alltables FOR ALL SEQUENCES, ALL TABLES, ALL SEQUENCES;
+
 -- Tests for partitioned tables
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_forparted;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 02b5b041c45..5290b91e83e 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2354,6 +2354,8 @@ PsqlScanStateData
 PsqlSettings
 Publication
 PublicationActions
+PublicationAllObjSpec
+PublicationAllObjType
 PublicationDesc
 PublicationInfo
 PublicationObjSpec
-- 
2.43.0

#364shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#363)
Re: Logical Replication of sequences

On Wed, Oct 8, 2025 at 3:50 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the comments, here is an updated version with a fix to handle this.

Thanks. The patch looks good to me.

thanks
Shveta

#365Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#356)
Re: Logical Replication of sequences

On Wed, Oct 8, 2025 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have one more question: while testing the sequence sync, I found
this behavior is documented as well [1], but what's the reasoning
behind it? Why will REFRESH PUBLICATION synchronize only newly added
sequences, while REFRESH PUBLICATION SEQUENCES has to be used to
re-synchronize all sequences?

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is meant to
sync the existing sequences; it shouldn't add any new sequences. If
that is too confusing, we can discuss having a different syntax for it.

Sure, let's discuss this when we get this patch at the start of the
commit queue.

I have pushed the publications-related patch. Now, we can discuss this
command. I think the confusion arises from the fact that both commands
use REFRESH. So, how about using a different command for the second
case (sync/copy all existing sequences)? Some ideas that come to mind
are:

Alter Subscription sub1 REPLICATE Publication Sequences;
Alter Subscription sub1 RESYNC Publication Sequences;
Alter Subscription sub1 SYNC Publication Sequences;
Alter Subscription sub1 MERGE Publication Sequences;

Among these, the first three require a new keyword to be introduced. I
prefer to use an existing keyword if possible. Any ideas?
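
For illustration, a rough sketch of how this would read next to the
existing commands (sub1 is a placeholder subscription name; REFRESH
PUBLICATION SEQUENCES is the spelling in the patch under discussion,
and the last line shows one of the proposed replacements):

-- existing: picks up tables/sequences newly added to (or removed from)
-- the publication and copies the initial data of newly added objects
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

-- patch under discussion: re-synchronizes the values of all sequences
-- that are already part of the subscription
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION SEQUENCES;

-- one of the proposed alternative spellings for the second command
ALTER SUBSCRIPTION sub1 SYNC PUBLICATION SEQUENCES;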

--
With Regards,
Amit Kapila.

#366Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Amit Kapila (#365)
RE: Logical Replication of sequences

Dear Amit, Dilip,

I have pushed the publications related patch. Now, we can discuss this
command. I think confusion arises from the fact that both commands use
REFRESH. So, how about for the second case (sync/copy all existing
sequences), we use a different command, some ideas that come to my
mind are:

Alter Subscription sub1 REPLICATE Publication Sequences;
Alter Subscription sub1 RESYNC Publication Sequences;
Alter Subscription sub1 SYNC Publication Sequences;
Alter Subscription sub1 MERGE Publication Sequences;

Among these, the first three require a new keyword to be introduced. I
prefer to use existing keyword if possible. Any ideas?

I checked kwlist.h and the keywords below might also be used. Thoughts?

- COPY
- UPDATE
- OVERRIDING
- REPLACE
- REASSIGN

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#367Peter Smith
smithpb2250@gmail.com
In reply to: shveta malik (#364)
Re: Logical Replication of sequences

Hi,

I saw a sequence replication patch was committed recently [1], so I
was looking at the diffs. Below are a couple of observations:

//////////

1.
The following message seems overly long:
errmsg("publication parameters are not applicable to sequence
synchronization and will be ignored for sequences"));

I saw the message was already discussed here [2], but at that time, it
was not shortened much.

How about something shorter? Some examples.
errmsg("publication parameters will be ignored for sequences"));
errmsg("publication parameters will be ignored for sequence replication"));

======

2.
+-- Specifying WITH clause in an ALL SEQUENCES publication will emit a NOTICE.
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause
FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL
SEQUENCES WITH (publish_generated_columns = 'stored');
+RESET client_min_messages;

Why not also test WITH('publish_via_partition_root')?
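
For illustration, such an additional case might look something like
this (a sketch only, in the style of the quoted tests; the publication
name is made up, and presumably the same NOTICE would be emitted):

SET client_min_messages = 'NOTICE';
CREATE PUBLICATION regress_pub_for_allsequences_viaroot FOR ALL SEQUENCES
    WITH (publish_via_partition_root = true);
RESET client_min_messages;
DROP PUBLICATION regress_pub_for_allsequences_viaroot;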

======
[1]: https://github.com/postgres/postgres/commit/96b37849734673e7c82fb86c4f0a46a28f500ac8
[2]: /messages/by-id/CAA4eK1L3SdsMFB6KZ6qEU05wUDtoKS+Osvo9UoGP--qVz2PBrg@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#368Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#365)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 10:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 8, 2025 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have one more question: while testing the sequence sync, I found
this behavior is documented as well [1], but what's the reasoning
behind it? Why will REFRESH PUBLICATION synchronize only newly added
sequences, while REFRESH PUBLICATION SEQUENCES has to be used to
re-synchronize all sequences?

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is meant to
sync the existing sequences; it shouldn't add any new sequences. If
that is too confusing, we can discuss having a different syntax for it.

Sure, let's discuss this when we get this patch at the start of the
commit queue.

I have pushed the publications-related patch. Now, we can discuss this
command. I think the confusion arises from the fact that both commands
use REFRESH.

Right

So, how about using a different command for the second case (sync/copy
all existing sequences)? Some ideas that come to mind are:

Alter Subscription sub1 REPLICATE Publication Sequences;
Alter Subscription sub1 RESYNC Publication Sequences;
Alter Subscription sub1 SYNC Publication Sequences;
Alter Subscription sub1 MERGE Publication Sequences;

Among these, the first three require a new keyword to be introduced. I
prefer to use an existing keyword if possible. Any ideas?

I would have preferred "Alter Subscription sub1 SYNC Publication
Sequences" but if your preference is to use existing keywords then
IMHO "MERGE Publication Sequences" or "UPDATE Publication Sequences"
are also good options.

--
Regards,
Dilip Kumar
Google

#369Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#368)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 11:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 9, 2025 at 10:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 8, 2025 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have one more question: while testing the sequence sync, I found
this behavior is documented as well [1], but what's the reasoning
behind it? Why will REFRESH PUBLICATION synchronize only newly added
sequences, while REFRESH PUBLICATION SEQUENCES has to be used to
re-synchronize all sequences?

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is meant to
sync the existing sequences; it shouldn't add any new sequences. If
that is too confusing, we can discuss having a different syntax for it.

Sure, let's discuss this when we get this patch at the start of the
commit queue.

I have pushed the publications-related patch. Now, we can discuss this
command. I think the confusion arises from the fact that both commands
use REFRESH.

Right

So, how about using a different command for the second case (sync/copy
all existing sequences)? Some ideas that come to mind are:

Alter Subscription sub1 REPLICATE Publication Sequences;
Alter Subscription sub1 RESYNC Publication Sequences;
Alter Subscription sub1 SYNC Publication Sequences;
Alter Subscription sub1 MERGE Publication Sequences;

Among these, the first three require a new keyword to be introduced. I
prefer to use an existing keyword if possible. Any ideas?

I would have preferred "Alter Subscription sub1 SYNC Publication
Sequences" but if your preference is to use existing keywords then
IMHO "MERGE Publication Sequences" or "UPDATE Publication Sequences"
are also good options.

I would prefer "COPY Publication Sequences" or "UPDATE Publication
Sequences" among the given options. We have a precedence for copy
(copy_data) in publication command parameters, so, COPY could be a
better option.
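
For reference, the copy_data precedent mentioned above shows up in the
existing refresh options, e.g. (sub1 is a placeholder name):

ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION WITH (copy_data = false);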

--
With Regards,
Amit Kapila.

#370Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#367)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 11:21 AM Peter Smith <smithpb2250@gmail.com> wrote:

I saw a sequence replication patch was committed recently [1], so I
was looking at the diffs. Below are a couple of observations:

//////////

1.
The following message seems overly long:
errmsg("publication parameters are not applicable to sequence
synchronization and will be ignored for sequences"));

I saw the message was already discussed here [2], but at that time, it
was not shortened much.

How about something shorter? Some examples.
errmsg("publication parameters will be ignored for sequences"));
errmsg("publication parameters will be ignored for sequence replication"));

I thought about these alternatives but went with the longer message in
favor of clarity. However, I am fine with changing it if others also
think so. Let's wait and see if others have an opinion on this point.

======

2.
+-- Specifying WITH clause in an ALL SEQUENCES publication will emit a NOTICE.
+SET client_min_messages = 'NOTICE';
+CREATE PUBLICATION regress_pub_for_allsequences_alltables_withclause
FOR ALL SEQUENCES, ALL TABLES WITH (publish = 'insert');
+CREATE PUBLICATION regress_pub_for_allsequences_withclause FOR ALL
SEQUENCES WITH (publish_generated_columns = 'stored');
+RESET client_min_messages;

Why not also test WITH('publish_via_partition_root')?

It is not required to write a test with all the options; the current
set chosen seems sufficient.

--
With Regards,
Amit Kapila.

#371shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#369)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 9, 2025 at 11:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 9, 2025 at 10:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 8, 2025 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have one more question: while testing the sequence sync, I found
this behavior is documented as well [1], but what's the reasoning
behind it? Why will REFRESH PUBLICATION synchronize only newly added
sequences, while REFRESH PUBLICATION SEQUENCES has to be used to
re-synchronize all sequences?

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is meant to
sync the existing sequences; it shouldn't add any new sequences. If
that is too confusing, we can discuss having a different syntax for it.

Sure, let's discuss this when we get this patch at the start of the
commit queue.

I have pushed the publications-related patch. Now, we can discuss this
command. I think the confusion arises from the fact that both commands
use REFRESH.

Right

So, how about using a different command for the second case (sync/copy
all existing sequences)? Some ideas that come to mind are:

Alter Subscription sub1 REPLICATE Publication Sequences;
Alter Subscription sub1 RESYNC Publication Sequences;
Alter Subscription sub1 SYNC Publication Sequences;
Alter Subscription sub1 MERGE Publication Sequences;

Among these, the first three require a new keyword to be introduced. I
prefer to use an existing keyword if possible. Any ideas?

I would have preferred "Alter Subscription sub1 SYNC Publication
Sequences" but if your preference is to use existing keywords then
IMHO "MERGE Publication Sequences" or "UPDATE Publication Sequences"
are also good options.

I would prefer "COPY Publication Sequences" or "UPDATE Publication
Sequences" among the given options. We have a precedence for copy
(copy_data) in publication command parameters, so, COPY could be a
better option.

If not SYNC, then COPY looks like the next best option to me.

thanks
Shveta

#372Peter Smith
smithpb2250@gmail.com
In reply to: shveta malik (#371)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 5:32 PM shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Oct 9, 2025 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 9, 2025 at 11:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 9, 2025 at 10:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 8, 2025 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have one more question: while testing the sequence sync, I found
this behavior is documented as well [1], but what's the reasoning
behind it? Why will REFRESH PUBLICATION synchronize only newly added
sequences, while REFRESH PUBLICATION SEQUENCES has to be used to
re-synchronize all sequences?

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is meant to
sync the existing sequences; it shouldn't add any new sequences. If
that is too confusing, we can discuss having a different syntax for it.

Sure, let's discuss this when we get this patch at the start of the
commit queue.

I have pushed the publications related patch. Now, we can discuss this
command. I think confusion arises from the fact that both commands use
REFRESH.

Right

So, how about for the second case (sync/copy all existing
sequences) we use a different command? Some ideas that come to my
mind are:

Alter Subscription sub1 REPLICATE Publication Sequences;
Alter Subscription sub1 RESYNC Publication Sequences;
Alter Subscription sub1 SYNC Publication Sequences;
Alter Subscription sub1 MERGE Publication Sequences;

Among these, the first three require a new keyword to be introduced. I
prefer to use an existing keyword if possible. Any ideas?

I would have preferred "Alter Subscription sub1 SYNC Publication
Sequences" but if your preference is to use existing keywords then
IMHO "MERGE Publication Sequences" or "UPDATE Publication Sequences"
are also good options.

I would prefer "COPY Publication Sequences" or "UPDATE Publication
Sequences" among the given options. We have a precedence for copy
(copy_data) in publication command parameters, so, COPY could be a
better option.

If not SYNC, then COPY looks like the next best option to me.

Something about all these ideas seems strange to me:

I think the "ALTER SUBSCRIPTION sub REFRESH PUBLICATION" command has
the word PUBLICATION in it because it's the PUBLICATION has changed
(stuff added/removed), so we need to refresh it.

OTOH, the synchronisation of *existing* sequences is different - this
is more like the subscription saying "Just get me updated values for
the sequences I already know about". Therefore, I don't think the word
PUBLICATION is relevant here.

~~

So my suggestion is very different. Just this:
"ALTER SUBSCRIPTION sub REFRESH SEQUENCES"

I feel this is entirely consistent, because:

PUBLICATION objects have changed. Refresh me the new objects => ALTER
SUBSCRIPTION sub REFRESH PUBLICATION;

SEQUENCE values have changed. Refresh me the new values => ALTER
SUBSCRIPTION sub REFRESH SEQUENCES;
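
A minimal usage sketch, assuming this proposed syntax is adopted (sub is a
placeholder subscription name; the documentation patch later in the thread
shows the same command):

ALTER SUBSCRIPTION sub REFRESH PUBLICATION;  -- publication membership changed: pick up added/removed tables and sequences
ALTER SUBSCRIPTION sub REFRESH SEQUENCES;    -- re-synchronize values of the sequences already known to the subscription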

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#373Dilip Kumar
dilipbalaut@gmail.com
In reply to: Peter Smith (#372)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:

I think the "ALTER SUBSCRIPTION sub REFRESH PUBLICATION" command has
the word PUBLICATION in it because it's the PUBLICATION has changed
(stuff added/removed), so we need to refresh it.

OTOH, the synchronisation of *existing* sequences is different - this
is more like the subscription saying "Just get me updated values for
the sequences I already know about". Therefore, I don't think the word
PUBLICATION is relevant here.

~~

So my suggestion is very different. Just this:
"ALTER SUBSCRIPTION sub REFRESH SEQUENCES"

I feel this is entirely consistent, because:

PUBLICATION objects have changed. Refresh me the new objects => ALTER
SUBSCRIPTION sub REFRESH PUBLICATION;

SEQUENCE values have changed. Refresh me the new values => ALTER
SUBSCRIPTION sub REFRESH SEQUENCES;

I prefer this suggestion over the previous proposal, so +1 from my side.

--
Regards,
Dilip Kumar
Google

#374Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#372)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:

Something about all these ideas seems strange to me:

I think the "ALTER SUBSCRIPTION sub REFRESH PUBLICATION" command has
the word PUBLICATION in it because it's the PUBLICATION has changed
(stuff added/removed), so we need to refresh it.

OTOH, the synchronisation of *existing* sequences is different - this
is more like the subscription saying "Just get me updated values for
the sequences I already know about". Therefore, I don't think the word
PUBLICATION is relevant here.

makes sense.

~~

So my suggestion is very different. Just this:
"ALTER SUBSCRIPTION sub REFRESH SEQUENCES"

+1.

--
With Regards,
Amit Kapila.

#375vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#372)
5 attachment(s)
Re: Logical Replication of sequences

On Thu, 9 Oct 2025 at 12:30, Peter Smith <smithpb2250@gmail.com> wrote:

On Thu, Oct 9, 2025 at 5:32 PM shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Oct 9, 2025 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 9, 2025 at 11:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Oct 9, 2025 at 10:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 8, 2025 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 7, 2025 at 5:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have one more question: while testing the sequence sync, I found
this behavior is documented as well[1], but what's the reasoning
behind it? Why does REFRESH PUBLICATION synchronize only newly added
sequences, while REFRESH PUBLICATION SEQUENCES must be used to
re-synchronize all sequences?

The idea is that REFRESH PUBLICATION should behave similarly for
tables and sequences. This means that this command is primarily used
to add/remove tables/sequences and copy their respective initial
contents. The new command REFRESH PUBLICATION SEQUENCES is to sync the
existing sequences; it shouldn't add any new sequences. Now, if it is
too confusing, we can discuss having a different syntax for it.

Sure, let's discuss this when we get this patch at the start of the
commit queue.

I have pushed the publications related patch. Now, we can discuss this
command. I think confusion arises from the fact that both commands use
REFRESH.

Right

So, how about for the second case (sync/copy all existing
sequences) we use a different command? Some ideas that come to my
mind are:

Alter Subscription sub1 REPLICATE Publication Sequences;
Alter Subscription sub1 RESYNC Publication Sequences;
Alter Subscription sub1 SYNC Publication Sequences;
Alter Subscription sub1 MERGE Publication Sequences;

Among these, the first three require a new keyword to be introduced. I
prefer to use an existing keyword if possible. Any ideas?

I would have preferred "Alter Subscription sub1 SYNC Publication
Sequences" but if your preference is to use existing keywords then
IMHO "MERGE Publication Sequences" or "UPDATE Publication Sequences"
are also good options.

I would prefer "COPY Publication Sequences" or "UPDATE Publication
Sequences" among the given options. We have a precedence for copy
(copy_data) in publication command parameters, so, COPY could be a
better option.

If not SYNC, then COPY looks like the next best option to me.

Something about all these ideas seems strange to me:

I think the "ALTER SUBSCRIPTION sub REFRESH PUBLICATION" command has
the word PUBLICATION in it because it's the PUBLICATION has changed
(stuff added/removed), so we need to refresh it.

OTOH, the synchronisation of *existing* sequences is different - this
is more like the subscription saying "Just get me updated values for
the sequences I already know about". Therefore, I don't think the word
PUBLICATION is relevant here.

~~

So my suggestion is very different. Just this:
"ALTER SUBSCRIPTION sub REFRESH SEQUENCES"

I feel this is entirely consistent, because:

PUBLICATION objects have changed. Refresh me the new objects => ALTER
SUBSCRIPTION sub REFRESH PUBLICATION;

SEQUENCE values have changed. Refresh me the new values => ALTER
SUBSCRIPTION sub REFRESH SEQUENCES;

+1 for this syntax. Here is an updated patch with these changes.

Regards,
Vignesh

Attachments:

v20251009-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch)
From db14452fe9f84a8939c07ced26e6e9f7f1d44c2e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 9 Oct 2025 09:53:05 +0530
Subject: [PATCH v20251009 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 197 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 243 insertions(+), 199 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 78b03f0572b..94156513ddf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -508,13 +508,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 33b7ec7f029..d27f6274188 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -970,7 +970,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..45b6d429558
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2e3d7ea9a5f..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,78 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
-										   true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1756,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1774,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1790,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..a85aca2dceb 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5876,7 +5876,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 641bece12c7..931fd46bbe3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5306,7 +5306,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5364,7 +5364,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8cac623c477..96b7d9821f1 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -96,7 +96,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
 									  bool get_sequences, bool not_ready);
 
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b7c35372b48..6b1189adeb1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2923,7 +2923,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20251009-0001-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From 661088714fcc034aabe25942666f0e34e77bb0ba Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20251009 1/5] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21caf2d43bf..dc0c2886674 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10987,7 +10987,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index dc09d1a3f03..4e445fe0cd7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4361,7 +4361,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

v20251009-0005-Documentation-for-sequence-synchronization.patch (text/x-patch)
From d0032ca193cdd356a2939b3a13535bf29f09efbf Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 7 Oct 2025 20:41:56 +0530
Subject: [PATCH v20251009 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 359 insertions(+), 45 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..e92e530d0b1 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 39e658b7808..625fffb3d64 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values: by executing
+     <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. A sequence
+     synchronization worker, when one is running, also counts toward this
+     limit.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index dc4fc29466d..954ca320331 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2041,8 +2041,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2188,6 +2189,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..d2c9f84699d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To
+          re-synchronize them, use
+          <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>,
+      which only synchronizes newly added sequences,
+      <literal>REFRESH SEQUENCES</literal> re-synchronizes the sequence data
+      for all subscribed sequences. It does not, however, add newly published
+      sequences to, or remove dropped sequences from, the subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

v20251009-0004-New-worker-for-sequence-synchronization-du.patchtext/x-patch; charset=UTF-8; name=v20251009-0004-New-worker-for-sequence-synchronization-du.patchDownload
From 830c6ecbf1e46c1767d1cc245e43da4354ac2b64 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 9 Oct 2025 10:12:30 +0530
Subject: [PATCH v20251009 4/5] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state, using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, this
      does not add newly published sequences or remove sequences that are no
      longer published.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 746 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 238 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1388 insertions(+), 180 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 94156513ddf..0a9ab03ca87 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index bac476710bf..3b0245e988b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1084,7 +1084,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2084,7 +2084,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..99e6f566459 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the sequencesync worker's last start time (last_seqsync_start_time),
+ * which is tracked in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..870b172e52d
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,746 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if there is a sequencesync worker already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences to match the publisher's parameters.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences to match the publisher's parameters.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		/* Release whichever of the two we did manage to acquire */
+		if (HeapTupleIsValid(tup))
+			ReleaseSysCache(tup);
+		if (sequence_rel)
+			table_close(sequence_rel, RowExclusiveLock);
+
+		(*skipped_count)++;
+		elog(LOG, "skipping synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
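+		/*
+		 * Build a single query that, for each (schema, sequence) pair in this
+		 * batch, fetches the sequence parameters from pg_sequence together
+		 * with the current value and page LSN from pg_get_sequence_data() on
+		 * the publisher.
+		 */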
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher and the result set may therefore be smaller than the
+		 * batch. The hash_search() with HASH_REMOVE takes care of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if the hash table has not been created or is empty */
+	if (sequences_to_copy == NULL ||
+		hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (OidIsValid(reloid))
+	{
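+		/* Invalidate only the entry for the given relation, if any */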
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes with XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
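+	/*
+	 * Collect, into the sequences_to_copy hash table, all sequences of this
+	 * subscription whose state in pg_subscription_rel is not READY.
+	 */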
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are static, since the
+	 * same values can be used until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a85aca2dceb..826e021d3f3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5575,7 +5601,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5695,8 +5722,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5807,6 +5834,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5826,14 +5857,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5904,6 +5937,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5916,9 +5953,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index b176d5130e4..42c118167ee 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1914,7 +1914,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b51d2b17379..8a2e1d1158a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 96b7d9821f1..4315efc348e 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..ad96e616c02
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,238 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize existing
+# sequences, but should not sync newly published sequences of the publisher.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION: sequence synchronization should
+# fail when the sequence definition differs between publisher and subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for the mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message for the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6b1189adeb1..efe726af36c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20251009-0002-Introduce-REFRESH-SEQUENCES-for-subscripti.patch (text/x-patch; charset=US-ASCII)
From 4253a9688794cde57dc2715dccf609931c1ffbe0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 9 Oct 2025 09:45:16 +0530
Subject: [PATCH v20251009 2/5] Introduce "REFRESH SEQUENCES" for subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command resets the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
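
For example (illustrative only; "mysub" is a hypothetical subscription
name):

  ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
    -- adds newly published sequences, removes no-longer-published ones
  ALTER SUBSCRIPTION mysub REFRESH SEQUENCES;
    -- resets existing sequence entries to INIT so they are re-synchronized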

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/commands/subscriptioncmds.c     | 325 ++++++++++++++------
 src/backend/executor/execReplication.c      |   4 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/tablesync.c |   7 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |  10 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/tools/pgindent/typedefs.list            |   1 +
 10 files changed, 318 insertions(+), 112 deletions(-)

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..78b03f0572b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionRelations(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..bac476710bf 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,7 +107,7 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,6 +768,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
@@ -776,25 +784,46 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				bool		pubisseq;
+				bool		subisseq;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
-										 rv->schemaname, rv->relname);
+				CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+
+				pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+				subisseq = (relkind == RELKIND_SEQUENCE);
+
+				/*
+				 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+				 * treated interchangeably, but ensure that sequences
+				 * (RELKIND_SEQUENCE) match exactly on both publisher and
+				 * subscriber.
+				 */
+				if (pubisseq != subisseq)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 
-				AddSubscriptionRelState(subid, relid, table_state,
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +831,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +926,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +949,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -943,34 +977,47 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 								  subrel_local_oids, subrel_count, sub->name);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
+			bool		pubisseq;
+			bool		subisseq;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
-									 rv->schemaname, rv->relname);
+			relkind = get_rel_relkind(relid);
+			CheckSubscriptionRelkind(relkind, rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
+			pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+			subisseq = (relkind == RELKIND_SEQUENCE);
+
+			/*
+			 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+			 * treated interchangeably, but ensure that sequences
+			 * (RELKIND_SEQUENCE) match exactly on both publisher and
+			 * subscriber.
+			 */
+			if (pubisseq != subisseq)
+				ereport(ERROR,
+						errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+						errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but relkind \"%c\" on the subscriber",
+							   rv->schemaname, rv->relname, relinfo->relkind, relkind));
+
 			if (!bsearch(&relid, subrel_local_oids,
 						 subrel_count, sizeof(Oid), oid_cmp))
 			{
@@ -978,28 +1025,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1069,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1126,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1143,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1161,30 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+
+	/* Get local relation list. */
+	subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+	foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+	{
+		Oid			relid = subrel->relid;
+
+		UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+								   InvalidXLogRecPtr, false);
+		ereport(DEBUG1,
+				errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+								get_namespace_name(get_rel_namespace(relid)),
+								get_rel_name(relid),
+								sub->name));
+	}
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1820,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -2007,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		for (i = 0; i < subrel_count; i++)
 		{
 			Oid			relid = subrel_local_oids[i];
-			char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-			char	   *tablename = get_rel_name(relid);
 
-			appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							 schemaname, tablename);
+			if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+			{
+				char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+				char	   *tablename = get_rel_name(relid);
+
+				appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+								 schemaname, tablename);
+			}
 		}
 	}
 
@@ -2593,8 +2697,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2721,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,7 +2739,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2636,7 +2757,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
 						 "       FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
@@ -2644,11 +2765,20 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, inclusion of sequences in the target is supported */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, 'S'::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
 		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2661,7 +2791,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2807,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2700,7 +2839,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..4f0f8a38555 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1120,7 +1120,9 @@ void
 CheckSubscriptionRelkind(char relkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..2e3d7ea9a5f 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -923,7 +923,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
@@ -1633,8 +1633,9 @@ FetchTableStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 92eb17049c3..36dbdb8771b 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1114,7 +1114,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 64bfd309c9a..8fc04c6ff0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2318,11 +2318,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..8cac623c477 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 5290b91e83e..b7c35372b48 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2905,6 +2905,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0

#376Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#375)
Re: Logical Replication of sequences

On Thu, Oct 9, 2025 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:

So my suggestion is very different. Just this:
"ALTER SUBSCRIPTION sub REFRESH SEQUENCES"

I feel this is entirely consistent, because:

PUBLICATION objects have changed. Refresh me the new objects => ALTER
SUBSCRIPTION sub REFRESH PUBLICATION;

SEQUENCE values have changed. Refresh me the new values => ALTER
SUBSCRIPTION sub REFRESH SEQUENCES;

+1 for this syntax. Here is an updated patch having the changes for the same.

Few comments:
=============
1.
+ /* From version 19, inclusion of sequences in the target is supported */
+ if (server_version >= 190000)
+ appendStringInfo(&cmd,
+ "UNION ALL\n"
+ "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS
attrs, 'S'::\"char\" AS relkind\n"

Instead of hard coding 'S', can we use RELKIND_SEQUENCE?
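
For illustration, a minimal sketch of the quoted query built with the
macro instead of the literal (this assumes the same query shape as the
patch; string-pasting CppAsString2(RELKIND_SEQUENCE) would be another
option):

appendStringInfo(&cmd,
				 "UNION ALL\n"
				 "  SELECT DISTINCT s.schemaname, s.sequencename,"
				 " NULL::int2vector AS attrs, '%c'::\"char\" AS relkind\n"
				 "  FROM pg_catalog.pg_publication_sequences s\n"
				 "  WHERE s.pubname IN (%s)",
				 RELKIND_SEQUENCE, pub_names->data);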

2.
  else
  {
  tableRow[2] = NAMEARRAYOID;
- appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+ appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");

Why the above change?

3.
+ pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+ subisseq = (relkind == RELKIND_SEQUENCE);
+
+ /*
+ * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+ * treated interchangeably, but ensure that sequences
+ * (RELKIND_SEQUENCE) match exactly on both publisher and
+ * subscriber.
+ */
+ if (pubisseq != subisseq)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),

We can directly compare relkind here and avoid having two extra
variables. The same code is present in AlterSubscription_refresh, so
we can make a similar change there as well. BTW, the same scenario
could happen between a table and a sequence, no? If so, then we should
deal with that as well. It would be better if we can make these checks
part of CheckSubscriptionRelkind().
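
As a rough illustration of that suggestion, a hypothetical
CheckSubscriptionRelkind() taking both kinds might look like the sketch
below (parameter names are made up; the posted patch may structure this
differently):

void
CheckSubscriptionRelkind(char localkind, char remotekind,
						 const char *nspname, const char *relname)
{
	/* Only tables, partitioned tables and sequences can be targets. */
	if (localkind != RELKIND_RELATION &&
		localkind != RELKIND_PARTITIONED_TABLE &&
		localkind != RELKIND_SEQUENCE)
		ereport(ERROR,
				errcode(ERRCODE_WRONG_OBJECT_TYPE),
				errmsg("cannot use relation \"%s.%s\" as logical replication target",
					   nspname, relname));

	/*
	 * Tables and partitioned tables are interchangeable, but a sequence
	 * on one side must be a sequence on the other side as well.
	 */
	if ((localkind == RELKIND_SEQUENCE) != (remotekind == RELKIND_SEQUENCE))
		ereport(ERROR,
				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				errmsg("relation \"%s.%s\" type mismatch: source \"%c\", target \"%c\"",
					   nspname, relname, remotekind, localkind));
}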

4.
+ errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but
relkind \"%c\" on the subscriber",

I would like this message to be a bit shorter and speak in terms of source
and target, something like: errmsg("relation \"%s.%s\" type mismatch:
source \"%c\", target \"%c\"")

5.
+ *
+ * XXX: Currently, a replication slot is created for all
+ * subscriptions, including those for sequence-only publications.
+ * However, this is unnecessary, as incremental synchronization of
+ * sequences is not supported.

Can we try to avoid creating the slot/origin for sequence-only
subscriptions? We don't want to make the code complicated due to this,
so please try to create a top-up patch so that we can evaluate this
change separately.
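
For reference, once 'has_tables' is known after fetching the relation
list, the slot part of such a top-up patch could be a minimal sketch like
the one below (hypothetical; the origin is created before we even connect
to the publisher, so skipping it would need a bit more restructuring):

	/*
	 * Hypothetical sketch: a sequence-only subscription never streams
	 * changes from WAL, so the remote replication slot is only needed
	 * when at least one table is subscribed.
	 */
	if (opts.create_slot && has_tables)
	{
		/* existing walrcv_create_slot() path, unchanged */
	}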

6.
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn
*wrconn, List *publications,
for (i = 0; i < subrel_count; i++)
{
Oid relid = subrel_local_oids[i];
- char    *schemaname = get_namespace_name(get_rel_namespace(relid));
- char    *tablename = get_rel_name(relid);
- appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
- schemaname, tablename);
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)

Why do we have the above check in the 0002 patch? If it is required, we
can add some comments to clarify it. Also, we should do this origin
check during the ALTER SUBSCRIPTION … REFRESH command, as we don't have
incremental WAL-based origin filtering for sequences.

--
With Regards,
Amit Kapila.

#377vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#376)
5 attachment(s)
Re: Logical Replication of sequences

On Fri, 10 Oct 2025 at 13:03, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 9, 2025 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:

So my suggestion is very different. Just this:
"ALTER SUBSCRIPTION sub REFRESH SEQUENCES"

I feel this is entirely consistent, because:

PUBLICATION objects have changed. Refresh me the new objects => ALTER
SUBSCRIPTION sub REFRESH PUBLICATION;

SEQUENCE values have changed. Refresh me the new values => ALTER
SUBSCRIPTION sub REFRESH SEQUENCES;

+1 for this syntax. Here is an updated patch having the changes for the same.

Few comments:
=============
1.
+ /* From version 19, inclusion of sequences in the target is supported */
+ if (server_version >= 190000)
+ appendStringInfo(&cmd,
+ "UNION ALL\n"
+ "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS
attrs, 'S'::\"char\" AS relkind\n"

Instead of hard coding 'S', can we use RELKIND_SEQUENCE?

Modified

2.
else
{
tableRow[2] = NAMEARRAYOID;
- appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+ appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename\n");

Why the above change?

This is not required; I have removed this change.

3.
+ pubisseq = (relinfo->relkind == RELKIND_SEQUENCE);
+ subisseq = (relkind == RELKIND_SEQUENCE);
+
+ /*
+ * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be
+ * treated interchangeably, but ensure that sequences
+ * (RELKIND_SEQUENCE) match exactly on both publisher and
+ * subscriber.
+ */
+ if (pubisseq != subisseq)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),

We can directly compare relkind here and avoid having two extra
variables. The same code is present in AlterSubscription_refresh, so
we can make a similar change there as well. BTW, the same scenario
could happen between a table and a sequence, no? If so, then we should
deal with that as well. It would be better if we can make these checks
part of CheckSubscriptionRelkind().

Modified

4.
+ errmsg("relation \"%s.%s\" has relkind \"%c\" on the publisher but
relkind \"%c\" on the subscriber",

I would like this message to be a bit shorter and speak in terms of source
and target, something like: errmsg("relation \"%s.%s\" type mismatch:
source \"%c\", target \"%c\"")

Modified

5.
+ *
+ * XXX: Currently, a replication slot is created for all
+ * subscriptions, including those for sequence-only publications.
+ * However, this is unnecessary, as incremental synchronization of
+ * sequences is not supported.

Can we try to avoid creating the slot/origin for sequence-only
subscriptions? We don't want to make the code complicated due to this,
so please try to create a top-up patch so that we can evaluate this
change separately.

I will do this and post it along with the next version

6
@@ -2403,11 +2503,15 @@ check_publications_origin(WalReceiverConn
*wrconn, List *publications,
for (i = 0; i < subrel_count; i++)
{
Oid relid = subrel_local_oids[i];
- char    *schemaname = get_namespace_name(get_rel_namespace(relid));
- char    *tablename = get_rel_name(relid);
- appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
- schemaname, tablename);
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)

Why do we have the above check in the 0002 patch? If it is required, we
can add some comments to clarify it. Also, we should do this origin
check during the ALTER SUBSCRIPTION … REFRESH command, as we don't have
incremental WAL-based origin filtering for sequences.

This check is not required, as there is a scenario where N1
replicates sequences to N2 and N2 replicates sequences to N3; the
warning will be helpful in that case.

The attached patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20251011-0002-Introduce-REFRESH-SEQUENCES-for-subscripti.patch (text/x-patch)
From 930165c0449d8cc1449ef69cd121913b7f6f4049 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 9 Oct 2025 09:45:16 +0530
Subject: [PATCH v20251011 2/5] Introduce "REFRESH SEQUENCES" for subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table to the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c       |  61 +++-
 src/backend/commands/subscriptioncmds.c     | 380 ++++++++++++++------
 src/backend/executor/execReplication.c      |  24 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/relation.c  |   2 +-
 src/backend/replication/logical/tablesync.c |   7 +-
 src/backend/replication/logical/worker.c    |   4 +-
 src/backend/replication/pgoutput/pgoutput.c |   2 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |  10 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/tools/pgindent/typedefs.list            |   1 +
 13 files changed, 381 insertions(+), 134 deletions(-)

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..78b03f0572b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionRelations(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1413cf5c9cc..2976483b740 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,12 +107,12 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
 									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+									  char *subname, bool only_sequences);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,6 +737,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +754,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +768,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *relations;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+									  NULL, 0, stmt->subname, false);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +784,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			relations = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(SubscriptionRelKind, relinfo, relations)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = relinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, relinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +813,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +841,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +895,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +908,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +931,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -940,33 +956,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 		check_publications_origin(wrconn, sub->publications, copy_data,
 								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
+								  subrel_local_oids, subrel_count, sub->name,
+								  false);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known tables. If the table is not known locally create a new state
+		 * for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(SubscriptionRelKind, relinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = relinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, relinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
@@ -978,28 +992,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1036,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1093,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1110,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		list_free_deep(sub_remove_rels);
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1128,58 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("subscription \"%s\" could not connect to the publisher: %s",
+						sub->name, err));
+
+	PG_TRY();
+	{
+		check_publications_origin(wrconn, sub->publications, false,
+								  sub->retaindeadtuples, sub->origin, NULL, 0,
+								  sub->name, true);
+
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+									InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1732,6 +1815,18 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("ALTER SUBSCRIPTION ... REFRESH SEQUENCES is not allowed for disabled subscriptions"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1825,7 +1920,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 			check_publications_origin(wrconn, sub->publications, false,
 									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+									  sub->name, false);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2007,7 +2102,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2343,7 +2438,7 @@ static void
 check_publications_origin(WalReceiverConn *wrconn, List *publications,
 						  bool copydata, bool retain_dead_tuples,
 						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+						  int subrel_count, char *subname, bool only_sequences)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2369,23 +2464,52 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	 */
 	check_table_sync = copydata && origin_none;
 
-	/* retain_dead_tuples and table sync checks occur separately */
-	Assert(!(check_rdt && check_table_sync));
+	/*
+	 * retain_dead_tuples, table sync, and sequence sync checks occur
+	 * separately.
+	 */
+	Assert(!(check_rdt && check_table_sync && only_sequences));
 
 	/* Return if no checks are required */
-	if (!check_rdt && !check_table_sync)
+	if (!check_rdt && !check_table_sync && !only_sequences)
 		return;
 
 	initStringInfo(&cmd);
-	appendStringInfoString(&cmd,
-						   "SELECT DISTINCT P.pubname AS pubname\n"
-						   "FROM pg_publication P,\n"
-						   "     LATERAL pg_get_publication_tables(P.pubname) GPT\n"
-						   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
-						   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
-						   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
-						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
-						   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+
+	if (walrcv_server_version(wrconn) < 190000)
+		appendStringInfoString(&cmd,
+							   "SELECT DISTINCT P.pubname AS pubname\n"
+							   "FROM pg_publication P,\n"
+							   "     LATERAL pg_get_publication_tables(P.pubname) GPT\n"
+							   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
+							   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+							   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
+							   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+							   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+	else
+	{
+		if (only_sequences)
+			appendStringInfoString(&cmd,
+								   "SELECT DISTINCT P.pubname AS pubname\n"
+								   "FROM pg_publication P,\n"
+								   "     LATERAL pg_get_publication_sequences(P.pubname) GPT\n"
+								   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
+								   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+								   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
+								   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+								   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+		else
+			appendStringInfoString(&cmd,
+								   "SELECT DISTINCT P.pubname AS pubname\n"
+								   "FROM pg_publication P\n"
+								   "     CROSS JOIN LATERAL (SELECT relid FROM pg_get_publication_tables(P.pubname) UNION ALL"
+								   "                   		 SELECT relid FROM pg_get_publication_sequences(P.pubname)) GPT\n"
+								   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
+								   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+								   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
+								   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+								   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+	}
 	GetPublicationsStr(publications, &cmd, true);
 	appendStringInfoString(&cmd, ")\n");
 
@@ -2454,11 +2578,11 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		/* Prepare the list of publication(s) for warning message. */
 		GetPublicationsStr(publist, pubnames, false);
 
-		if (check_table_sync)
+		if (check_table_sync || only_sequences)
 		{
 			appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
 							 subname);
-			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher relations did not come from other origins."));
 		}
 		else
 		{
@@ -2470,8 +2594,8 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				errmsg_internal("%s", err_msg->data),
-				errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
-								 "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains relations that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain relations that are written to by other subscriptions.",
 								 list_length(publist), pubnames->data),
 				errhint_internal("%s", err_hint->data));
 	}
@@ -2593,8 +2717,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(SubscriptionRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2602,15 +2741,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2618,7 +2759,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher. */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2636,14 +2777,26 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+		if (check_relkind)
+			appendStringInfo(&cmd, ", c.relkind\n");
+
+		appendStringInfo(&cmd, "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, published sequences are fetched as well. */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2661,7 +2814,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2677,22 +2830,31 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		SubscriptionRelKind *relinfo = (SubscriptionRelKind *) palloc(sizeof(SubscriptionRelKind));
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE && check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2700,7 +2862,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
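For reviewers who want to try the new relation-list query by hand, a minimal sketch of just the sequence part, run directly on a version 19 publisher, could look like the following (pub_all_seq is a made-up publication name; pg_publication_sequences is the catalog view added earlier in this series):

SELECT s.schemaname, s.sequencename
FROM pg_catalog.pg_publication_sequences s
WHERE s.pubname IN ('pub_all_seq');

This is only the shape of the query; fetch_relation_list() additionally UNIONs it with the table list and tags each row with a relkind, as in the hunk above.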
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..e3cc838cd38 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1117,13 +1117,33 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
+CheckSubscriptionRelkind(char relkind, char pubrelkind, const char *nspname,
 						 const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (relkind != RELKIND_RELATION &&
+		relkind != RELKIND_PARTITIONED_TABLE &&
+		relkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
 				 errdetail_relkind_not_supported(relkind)));
+
+	if (pubrelkind == '\0')
+		return;
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((relkind == RELKIND_SEQUENCE && pubrelkind != RELKIND_SEQUENCE) ||
+		((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE) &&
+		 !(pubrelkind == RELKIND_RELATION || pubrelkind == RELKIND_PARTITIONED_TABLE)))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   pubrelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   relkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
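As a concrete illustration of what the extended check is meant to reject, consider a hypothetical setup where the same qualified name is a sequence on the publisher but a table on the subscriber (all object names below are made up):

-- on the publisher
CREATE SEQUENCE public.order_id;

-- on the subscriber
CREATE TABLE public.order_id (n bigint);

With pubrelkind passed as RELKIND_SEQUENCE for this relation, synchronization would now stop with the "type mismatch" error above instead of trying to apply sequence data to a table.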
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
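With this production added, the new command reads as follows (sub1 is a hypothetical subscription name; per the check added in AlterSubscription() above, it must be enabled, otherwise the new error is raised):

ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;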
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..6ca3addc885 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -424,7 +424,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 		entry->localreloid = relid;
 
 		/* Check for supported relkind. */
-		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind, '\0',
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..d0284c8edd7 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -923,7 +923,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
@@ -1633,8 +1633,9 @@ FetchTableStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..0893cfbef87 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3366,7 +3366,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * unsupported relkinds; and the set of partitions can change, so checking
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
-	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+	CheckSubscriptionRelkind(partrel->rd_rel->relkind, '\0',
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3562,7 +3562,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 					partrel_new = partrelinfo_new->ri_RelationDesc;
 
 					/* Check that new partition also has supported relkind. */
-					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind, '\0',
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 847806b0a2e..a697991fa23 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,7 +1137,7 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
+	 * fetch_relation_list. But one can later change the publication so we still
 	 * need to check all the given publication-table mappings and report an
 	 * error if any publications have a different column list.
 	 */
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 64bfd309c9a..8fc04c6ff0a 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2318,11 +2318,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..8cac623c477 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -82,6 +83,12 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct SubscriptionRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} SubscriptionRelKind;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
@@ -90,7 +97,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..e2fbf5b4699 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char relkind, char pubrelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 5290b91e83e..b7c35372b48 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2905,6 +2905,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.43.0
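Putting this patch together with the publication-side support from earlier in the series, a minimal end-to-end sketch might be (connection string, publication, and subscription names are illustrative only; the actual copying of sequence values is performed by the sequence synchronization worker added by a later patch in this series):

-- on the publisher
CREATE PUBLICATION pub_all_seq FOR ALL SEQUENCES;

-- on the subscriber
CREATE SUBSCRIPTION sub1
    CONNECTION 'host=pub.example dbname=postgres'
    PUBLICATION pub_all_seq;

-- later, re-synchronize all sequence values from the publisher
ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;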

v20251011-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (text/x-patch; charset=US-ASCII)
From 254dc0c4ddf8f873ffdf9ee98fb72e8af1a01fa1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sat, 11 Oct 2025 12:16:04 +0530
Subject: [PATCH v20251011 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 197 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 243 insertions(+), 199 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 78b03f0572b..94156513ddf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -508,13 +508,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 33b7ec7f029..d27f6274188 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -970,7 +970,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..45b6d429558
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to the table synchronization workers and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d0284c8edd7..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,78 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
-										   true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1756,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1774,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1790,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 0893cfbef87..3ba61bfe199 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5876,7 +5876,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 641bece12c7..931fd46bbe3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5306,7 +5306,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5364,7 +5364,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 8cac623c477..96b7d9821f1 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -96,7 +96,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
 									  bool get_sequences, bool not_ready);
 
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b7c35372b48..6b1189adeb1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2923,7 +2923,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.43.0

v20251011-0005-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 581ff1d3a2ff51f44761b7ea1070d2cb32aaeb3c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 7 Oct 2025 20:41:56 +0530
Subject: [PATCH v20251011 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 359 insertions(+), 45 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..e92e530d0b1 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
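As a quick way to inspect these per-sequence state codes on the subscriber, a query along these lines can be used (sub1 is a hypothetical subscription name):

SELECT c.relname, sr.srsubstate
FROM pg_subscription_rel sr
     JOIN pg_subscription s ON s.oid = sr.srsubid
     JOIN pg_class c ON c.oid = sr.srrelid
WHERE s.subname = 'sub1' AND c.relkind = 'S';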
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 39e658b7808..625fffb3d64 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
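For example, the per-subscription limit on synchronization workers, which now also covers the sequence synchronization worker, could be raised like this (the value 4 is purely illustrative):

ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
SELECT pg_reload_conf();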
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last sequence value set by <function>nextval</function> or
+        <function>setval</function>, <literal>is_called</literal> indicates
+        whether the sequence has been used, and <literal>page_lsn</literal>
+        is the LSN of the most recent WAL record that modified this
+        sequence.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences are not
+   replicated continuously; instead, their current state can be synchronized
+   on demand at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link>, and
+   then, on the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. If any parameters differ, an error is
+    reported listing all mismatched sequences, and the sequence
+    synchronization worker exits. The apply worker detects this failure and
+    respawns the sequence synchronization worker, retrying until all
+    differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
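+   <para>
+    For example, if a subscriber sequence was created with a different
+    increment, it can be changed to match the publisher:
+<programlisting>
+test_sub=# ALTER SEQUENCE s1 INCREMENT BY 1;
+ALTER SEQUENCE
+</programlisting></para>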
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker detects this and excludes
+    the sequence from further synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Because incremental sequence changes are not replicated, sequence values
+    on the subscriber can become stale due to updates on the publisher.
+   </para>
+   <para>
+    To check for this, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences on the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
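+
+   <para>
+    The synchronization state of each sequence is recorded in
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>;
+    state <literal>r</literal> means the sequence is ready (synchronized).
+    For example:
+<programlisting>
+test_sub=# SELECT srrelid::regclass AS sequence, srsubstate
+test_sub-# FROM pg_subscription_rel;
+ sequence | srsubstate
+----------+------------
+ s1       | r
+ s2       | r
+(2 rows)
+</programlisting></para>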
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, ongoing changes to the sequences themselves are not.
+     On the subscriber, a sequence retains the last value it synchronized
+     from the publisher.  If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index dc4fc29466d..954ca320331 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2041,8 +2041,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2188,6 +2189,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..d2c9f84699d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      This also fetches missing sequence information from the publisher and
+      starts synchronization of any newly added sequences.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing table data and to synchronize
+          sequences in the publications that are being subscribed to when the
+          replication starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for
+          recommendations on how to handle errors reported for sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link>,
+      which only synchronizes newly added sequences,
+      <literal>REFRESH SEQUENCES</literal> re-synchronizes the data for all
+      subscribed sequences. It does not, however, add newly published
+      sequences to, or remove dropped sequences from, the subscription.
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle errors reported for sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable to sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle errors reported for sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable to sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable to
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable to sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable to sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable to
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

Attachment: v20251011-0004-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 2d4a457a74f191f110b22011ae8dfb63167bc863 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 9 Oct 2025 10:12:30 +0530
Subject: [PATCH v20251011 4/5] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are not yet READY, using
       pg_get_sequence_data().
    b) Reports an error if the sequence parameters differ between the
       publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing the synchronized sequences in
       batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, this
      does not add newly published sequences or remove dropped ones from the
      subscription (see the illustrative queries below).
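
As an illustration only (not part of the patch itself), the per-sequence
synchronization state and the sequencesync worker can be observed on the
subscriber with queries such as:

    -- 'i' = INIT, 'd' = DATASYNC, 'r' = READY
    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';

    -- the sequencesync worker, while it is running
    SELECT subid, pid, worker_type FROM pg_stat_subscription;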

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 746 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 238 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1388 insertions(+), 180 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 94156513ddf..0a9ab03ca87 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2976483b740..16a72414ed2 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1051,7 +1051,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2079,7 +2079,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..99e6f566459 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset last_seqsync_start_time (the time at which a sequencesync worker was
+ * last started) in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..870b172e52d
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,746 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC -> READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and so would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running for this subscription? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It lists sequences for which the subscriber lacks sufficient privileges, as
+ * well as sequences that exist on both sides but whose parameters do not
+ * match.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, " Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences to have parameters matching the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences to have parameters matching the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skipping synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
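+		/*
+		 * Build a single query fetching the current value and the definition
+		 * of every sequence in this batch.  For a batch containing, say,
+		 * ('public', 's1'), the constructed query resembles:
+		 *
+		 * SELECT s.schname, s.seqname, ps.*, seq.seqtypid, ...
+		 * FROM ( VALUES ('public', 's1') ) AS s (schname, seqname)
+		 * JOIN ... JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true
+		 */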
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences in the batch may be
+		 * missing on the publisher.  Entries that were processed have already
+		 * been removed from the hash table via hash_search(HASH_REMOVE).
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on the publisher were removed from synchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
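+/*
+ * Hash function for the sequences_to_copy hash table, keyed by
+ * (nspname, seqname).
+ */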
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes with XOR */
+	return h1 ^ h2;
+}
+
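+/*
+ * Match (comparison) function for the sequences_to_copy hash table keys.
+ */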
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling.  Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors here, which are probably caused by
+ * system resource problems and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid: InvalidOid for a sequencesync worker, or the relation's OID for a
+ * tablesync worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3ba61bfe199..e1dbce296c7 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4132,7 +4155,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5575,7 +5601,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5695,8 +5722,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5807,6 +5834,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5826,14 +5857,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5904,6 +5937,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5916,9 +5953,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index b176d5130e4..42c118167ee 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1914,7 +1914,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b51d2b17379..8a2e1d1158a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 96b7d9821f1..4315efc348e 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,6 +89,22 @@ typedef struct SubscriptionRelKind
 	char		relkind;
 } SubscriptionRelKind;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..ad96e616c02
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,238 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should resynchronize the existing
+# sequences, but should not pick up newly published sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error when a
+# sequence's definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the log message about the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6b1189adeb1..efe726af36c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20251011-0001-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch (text/x-patch)
From 3da090ca81f1d9bdfb19b4a03500dfbb55802342 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20251011 1/5] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 48 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..1413cf5c9cc 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1709,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1722,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2322,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2390,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21caf2d43bf..dc0c2886674 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10987,7 +10987,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index dc09d1a3f03..4e445fe0cd7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4361,7 +4361,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.43.0

#378Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#377)
Re: Logical Replication of sequences

Hi Vignesh,

Here are some minor review comments for patches 0001 and 0002.

////////////////////
Patch 0001
////////////////////

AlterSubscription:

1.1.
  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+ errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions")));

Maybe this could use a parameter substitution like:

errmsg("%s is not allowed for disabled subscriptions",
       "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");

That way (in preparation for the next patch), there will be only 1
message requiring translation.

////////////////////
Patch 0002
////////////////////

Commit message:

2.1
"This command update the sequence entries present in the..."

/update/updates/

======

AlterSubscription:

2.2
+ if (!sub->enabled)
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("ALTER SUBSCRIPTION ... REFRESH SEQUENCES is not allowed for disabled subscriptions"));

This can use the same message with parameter substitution as mentioned above (#1.1).

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#379Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#377)
Re: Logical Replication of sequences

On Sat, Oct 11, 2025 at 7:42 PM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have a few more comments on 0002 patch:
1. In check_publications_origin(), isn't it better to name
check_table_sync as check_sync as it is used for both tables and
sequences?

2. In check_publications_origin(), for all three queries, only the
following part seems to be different:

< 19:
	" LATERAL pg_get_publication_tables(P.pubname) GPT\n"

>= 19:
	only_sequences:
		" LATERAL pg_get_publication_sequences(P.pubname) GPT\n"
	else:
		" CROSS JOIN LATERAL (SELECT relid FROM pg_get_publication_tables(P.pubname) UNION ALL"
		" SELECT relid FROM pg_get_publication_sequences(P.pubname)) GPT\n"

2A. Can this part of the query be made dynamic, so that we have a single
query instead of three? If so, I think it would simplify the code.
What do you think? (See the sketch below.)
2B. Can we add/modify the comment atop check_publications_origin to
mention the sequence case?
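
For 2A, roughly something like this (only a sketch, not patch code; the
helper name and condition parameters are placeholders for whatever is
available at that point), so that just the differing FROM-clause fragment is
built conditionally and the surrounding query text is written once:

	/* Sketch: append the relation source that differs between the three cases. */
	static void
	append_pubrel_source(StringInfo cmd, int server_version, bool only_sequences)
	{
		if (server_version < 190000)
			appendStringInfoString(cmd,
								   " LATERAL pg_get_publication_tables(P.pubname) GPT\n");
		else if (only_sequences)
			appendStringInfoString(cmd,
								   " LATERAL pg_get_publication_sequences(P.pubname) GPT\n");
		else
			appendStringInfoString(cmd,
								   " CROSS JOIN LATERAL (SELECT relid FROM pg_get_publication_tables(P.pubname) UNION ALL"
								   " SELECT relid FROM pg_get_publication_sequences(P.pubname)) GPT\n");
	}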

3.
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
+CheckSubscriptionRelkind(char relkind, char pubrelkind, const char *nspname,
  const char *relname)
 {
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION &&
+ relkind != RELKIND_PARTITIONED_TABLE &&
+ relkind != RELKIND_SEQUENCE)
  ereport(ERROR,
  (errcode(ERRCODE_WRONG_OBJECT_TYPE),
  errmsg("cannot use relation \"%s.%s\" as logical replication target",
  nspname, relname),
  errdetail_relkind_not_supported(relkind)));
+
+ if (pubrelkind == '\0')
+ return;

This looks ad hoc. I think it would be better if the caller passes the
same value for local and remote relkind to this function. And
accordingly, change the name of the first two parameters.
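
For illustration only (the parameter names here are placeholders, not from
the patch), the idea is roughly:

	void
	CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
							 const char *nspname, const char *relname);

	/* a caller that has no separate remote relkind passes the local one twice */
	CheckSubscriptionRelkind(relkind, relkind, rv->schemaname, rv->relname);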

4.
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
…
+ check_publications_origin(wrconn, sub->publications, false,
+   sub->retaindeadtuples, sub->origin, NULL, 0,
+   sub->name, true);

Write a few comments to explain why it is necessary to check origins
in this case. If the additional comments atop
check_publications_origin() cover this case, then it's okay as it is.

5.
AlterSubscription_refresh()
- sub_remove_rels[remove_rel_len].relid = relid;
- sub_remove_rels[remove_rel_len++].state = state;
…
- char originname[NAMEDATALEN];
+ SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+ rel->relid = relid;
+ rel->state = state;
+
+ sub_remove_rels = lappend(sub_remove_rels, rel);

Why do we change an offset-based array into a list? It looks slightly
odd that, in the same function, the other similar array
pubrel_local_oids is not converted while the above is. And even if we
do so, I don't think we need a retail free
(list_free_deep(sub_remove_rels);) as the memory allocated here is in
the portal context, which should be reset by the end of the current
statement's execution.

--
With Regards,
Amit Kapila.

#380shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#379)
Re: Logical Replication of sequences

Please find a few initial comments for 0002:

1)
Patch commit msg says:

"This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command update the sequence entries present in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization."

But AlterSubscription_refresh_seq actually updates the state to DATASYNC.

2)
CheckSubscriptionRelkind()

+ /*
+ * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+ * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+ * exactly on both publisher and subscriber.
+ */
+ if ((relkind == RELKIND_SEQUENCE && pubrelkind != RELKIND_SEQUENCE) ||
+ ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE) &&
+ !(pubrelkind == RELKIND_RELATION || pubrelkind == RELKIND_PARTITIONED_TABLE)))
+ ereport(ERROR,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+    nspname, relname,
+    pubrelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+    relkind == RELKIND_SEQUENCE ? "sequence" : "table"));

Shall we simplify the check as:
if ((relkind == RELKIND_SEQUENCE && pubrelkind != RELKIND_SEQUENCE) ||
(relkind != RELKIND_SEQUENCE && pubrelkind == RELKIND_SEQUENCE))

3)
CreateSubscription()
+ relations = fetch_relation_list(wrconn, publications);

Can we please rename 'relations' to 'pubrels', as the latter gives
more clarity and is consistent with AlterSubscription_refresh()?

4)
- CheckSubscriptionRelkind(get_rel_relkind(relid),
+ CheckSubscriptionRelkind(relkind, relinfo->relkind,
  rv->schemaname, rv->relname);

We are passing two relkinds to CheckSubscriptionRelkind() now, but it is
difficult to understand which is which. Can we please rename relinfo to
pubrelinfo for clarity? This applies to both CreateSubscription() and
AlterSubscription_refresh().

5)
CheckSubscriptionRelkind
+ if (pubrelkind == '\0')
+ return;

Do you think we should add a comment in the function header saying that a
caller who wants to verify only the supported type should pass
pubrelkind as '\0'?

6)
Should we update the documentation of pg_subscription_rel where it says this:

This catalog only contains tables known to the subscription after
running either CREATE SUBSCRIPTION or ALTER SUBSCRIPTION ... REFRESH
PUBLICATION.

thanks
Shveta

#381Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#377)
RE: Logical Replication of sequences

Dear Vignesh,

Thanks for updating the patch. Here are comments for 0002.

```
+       if (pubrelkind == '\0')
+               return;
```

Instead of adding this part, can we provide another function which only checks
the type mismatch? The new one could be called from CreateSubscription() and
AlterSubscription_refresh().
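
Just to illustrate the idea (the function name and exact shape are only a
sketch, reusing the mismatch check and error message quoted upthread):

```
static void
CheckSubscriptionRelkindMatch(char relkind, char pubrelkind,
							  const char *nspname, const char *relname)
{
	if ((relkind == RELKIND_SEQUENCE) != (pubrelkind == RELKIND_SEQUENCE))
		ereport(ERROR,
				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
					   nspname, relname,
					   pubrelkind == RELKIND_SEQUENCE ? "sequence" : "table",
					   relkind == RELKIND_SEQUENCE ? "sequence" : "table"));
}
```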

```
+#include "nodes/primnodes.h"
...
+typedef struct SubscriptionRelKind
+{
+       RangeVar   *rv;
+       char            relkind;
+}
```

The data structure is used in subscriptioncmds.c. Can we move the definition
into that file?
Also, `relkind` indicates the type of the relation on the publisher. Can you
make that clearer, e.g. by renaming it to `relkind_on_pub`?

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#382shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#380)
Re: Logical Replication of sequences

On Mon, Oct 13, 2025 at 12:57 PM shveta malik <shveta.malik@gmail.com> wrote:

Please find a few initial comments for 0002:

7)

Currently CREATE SUB sets the state for sequences in
pg_subscription_rel to 'i', while ALTER SUB REFRESH SEQ sets it to
'd'. I think we do not need to maintain two different states here; we
can have both CREATE SUB and ALTER SUB set it to 'i'. For tables we
need multiple states, as we first do the copy and then apply changes
as well, but for sequences that is not the case. So we can have only
two states for sequences: 'i' (needs sync) and 'r' (ready). We can
update the comments to indicate the same.

thanks
Shveta

#383Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Hayato Kuroda (Fujitsu) (#381)
5 attachment(s)
RE: Logical Replication of sequences

On Tuesday, October 14, 2025 11:13 AM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Thanks for updating the patch. Here are comments for 0002.

```
+       if (pubrelkind == '\0')
+               return;
```

Instead of adding this part, can we provide another function which only checks
the type mismatch? The new one could be called from CreateSubscription() and
AlterSubscription_refresh().

Based on my analysis, the checks in this function should be performed in all
cases, so I did not add a new function.

```
+#include "nodes/primnodes.h"
...
+typedef struct SubscriptionRelKind
+{
+       RangeVar   *rv;
+       char            relkind;
+}
```

The data structure is used in subscriptioncmds.c. Can we move the definition
into that file?
Also, `relkind` indicates the type of the relation on the publisher. Can you
make that clearer, e.g. by renaming it to `relkind_on_pub`?

I chose to change the type name from SubscriptionRelKind to
PublicationRelKind since it is used to describe the relation on the publisher.

Attached is the latest patch set, which includes the following changes:

0001:
* Addressed Peter's comments [1].

0002:
* Addressed Amit's comments [2].
* Addressed Shveta's comments [3].
* Addressed Kuroda's comments [4].
* Fixed an issue where check_publications_origin unnecessarily checked
partitions and ancestors of sequences.
* Fixed an issue where check_publications_origin performed redundant checks on
the origin even when the origin option is set to ANY.
* Fixed an issue where check_publications_origin unnecessarily checked sequences
when only table checks are required, particularly when retain_dead_tuples is
true and the origin is set to ANY.
* Changed CheckSubscriptionRelkind to do the relkind match check during
replication as well. This ensures detection of relkind mismatches when a local
table on the subscriber is dropped and subsequently replaced by a new sequence
with the same name after the initial sync.

0003~0005:
Unchanged.

TODO:
* The latest comment from Shveta [5].
* The comment from Amit [6] to avoid creating a slot/origin for a sequence-only subscription.

[1]: /messages/by-id/CAHut+PuDCMu5QDmAo+MW0hKSThACfqfaPBGcwrBOUFE3RUPP=w@mail.gmail.com
[2]: /messages/by-id/CAA4eK1+SMY-dEhnFw8wXYSygk4Xr+SZJ-zEnuhxb+FmFrN0AzQ@mail.gmail.com
[3]: /messages/by-id/CAJpy0uC5H0jtmUEN8ES_PAMaYCfjmqEVuJiCdB=Aa98ivqc9FA@mail.gmail.com
[4]: /messages/by-id/OSCPR01MB149667963060BB6A068B275B9F5EBA@OSCPR01MB14966.jpnprd01.prod.outlook.com
[5]: /messages/by-id/CAJpy0uBpxor5EaSDFd0u2kXV5zgEkSq7g6iaSNwVXY0U1Rk4iA@mail.gmail.com
[6]: /messages/by-id/CAA4eK1J=gc8WXVc2Hy0Xcq4KtWU-z-dxBiZHbT62jz3QPBZ5CQ@mail.gmail.com

Best Regards,
Hou zj

Attachments:

v20251014-0005-Documentation-for-sequence-synchronization.patch
From 6c475437c2eb032dc0798f720d0e70110058724c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 7 Oct 2025 20:41:56 +0530
Subject: [PATCH v20251014 5/5] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 359 insertions(+), 45 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 1bfd8cae5ae..e92e530d0b1 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables and sequences known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog only contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 39e658b7808..625fffb3d64 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates last sequence value set in sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index dc4fc29466d..954ca320331 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2041,8 +2041,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2188,6 +2189,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..d2c9f84699d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.51.0.windows.1

v20251014-0001-Update-ALTER-SUBSCRIPTION-REFRESH-to-ALTER.patch
From 01f23980c1fd2f47c13bdf0f887620898eb4e154 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 21 Aug 2025 12:08:12 +0530
Subject: [PATCH v20251014 1/2] Update ALTER SUBSCRIPTION REFRESH to ALTER
 SUBSCRIPTION REFRESH PUBLICATION

This patch updates ALTER SUBSCRIPTION REFRESH to
ALTER SUBSCRIPTION REFRESH PUBLICATION for improved clarity and
extensibility, especially as the REFRESH operation is being extended
to sequences.
---
 src/backend/commands/subscriptioncmds.c    | 49 +++++++++++-----------
 src/backend/parser/gram.y                  |  2 +-
 src/include/nodes/parsenodes.h             |  2 +-
 src/test/regress/expected/subscription.out |  4 +-
 4 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 750d262fcca..ada9c7eb804 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1612,8 +1612,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 								 errhint("Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1667,8 +1667,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 										 "ALTER SUBSCRIPTION ... DROP PUBLICATION ... WITH (refresh = false)")));
 
 					/*
-					 * See ALTER_SUBSCRIPTION_REFRESH for details why this is
-					 * not allowed.
+					 * See ALTER_SUBSCRIPTION_REFRESH_PUBLICATION for details
+					 * why this is not allowed.
 					 */
 					if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 						ereport(ERROR,
@@ -1692,12 +1692,13 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
-		case ALTER_SUBSCRIPTION_REFRESH:
+		case ALTER_SUBSCRIPTION_REFRESH_PUBLICATION:
 			{
 				if (!sub->enabled)
 					ereport(ERROR,
 							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions")));
+							 errmsg("%s is not allowed for disabled subscriptions",
+									"ALTER SUBSCRIPTION ... REFRESH PUBLICATION")));
 
 				parse_subscription_options(pstate, stmt->options,
 										   SUBOPT_COPY_DATA, &opts);
@@ -1709,8 +1710,8 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				 *
 				 * But, having reached this two-phase commit "enabled" state
 				 * we must not allow any subsequent table initialization to
-				 * occur. So the ALTER SUBSCRIPTION ... REFRESH is disallowed
-				 * when the user had requested two_phase = on mode.
+				 * occur. So the ALTER SUBSCRIPTION ... REFRESH PUBLICATION is
+				 * disallowed when the user had requested two_phase = on mode.
 				 *
 				 * The exception to this restriction is when copy_data =
 				 * false, because when copy_data is false the tablesync will
@@ -1722,10 +1723,10 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && opts.copy_data)
 					ereport(ERROR,
 							(errcode(ERRCODE_SYNTAX_ERROR),
-							 errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed when two_phase is enabled"),
-							 errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
+							 errmsg("ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data is not allowed when two_phase is enabled"),
+							 errhint("Use ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));
 
-				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH");
+				PreventInTransactionBlock(isTopLevel, "ALTER SUBSCRIPTION ... REFRESH PUBLICATION");
 
 				AlterSubscription_refresh(sub, opts.copy_data, NULL);
 
@@ -2322,17 +2323,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  * it's a partitioned table), from some other publishers. This check is
  * required in the following scenarios:
  *
- * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "copy_data = true" and "origin = none":
+ * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
  *    - This check is skipped for tables already added, as incremental sync via
  *      WAL allows origin tracking. The list of such tables is in
  *      subrel_local_oids.
  *
- * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH statements
- *    with "retain_dead_tuples = true" and "origin = any", and for ALTER
- *    SUBSCRIPTION statements that modify retain_dead_tuples or origin, or
- *    when the publisher's status changes (e.g., due to a connection string
+ * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *    statements with "retain_dead_tuples = true" and "origin = any", and for
+ *    ALTER SUBSCRIPTION statements that modify retain_dead_tuples or origin,
+ *    or when the publisher's status changes (e.g., due to a connection string
  *    update):
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
@@ -2390,13 +2391,13 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	appendStringInfoString(&cmd, ")\n");
 
 	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH, subrel_local_oids contains
-	 * the list of relation oids that are already present on the subscriber.
-	 * This check should be skipped for these tables if checking for table
-	 * sync scenario. However, when handling the retain_dead_tuples scenario,
-	 * ensure all tables are checked, as some existing tables may now include
-	 * changes from other origins due to newly created subscriptions on the
-	 * publisher.
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relation oids that are already
+	 * present on the subscriber. This check should be skipped for these
+	 * tables if checking for table sync scenario. However, when handling the
+	 * retain_dead_tuples scenario, ensure all tables are checked, as some
+	 * existing tables may now include changes from other origins due to newly
+	 * created subscriptions on the publisher.
 	 */
 	if (check_table_sync)
 	{
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 21caf2d43bf..dc0c2886674 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10987,7 +10987,7 @@ AlterSubscriptionStmt:
 					AlterSubscriptionStmt *n =
 						makeNode(AlterSubscriptionStmt);
 
-					n->kind = ALTER_SUBSCRIPTION_REFRESH;
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_PUBLICATION;
 					n->subname = $3;
 					n->options = $6;
 					$$ = (Node *) n;
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index dc09d1a3f03..4e445fe0cd7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4361,7 +4361,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_SET_PUBLICATION,
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
-	ALTER_SUBSCRIPTION_REFRESH,
+	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/regress/expected/subscription.out b/src/test/regress/expected/subscription.out
index c7f1266fc2f..ae278e26b3a 100644
--- a/src/test/regress/expected/subscription.out
+++ b/src/test/regress/expected/subscription.out
@@ -107,7 +107,7 @@ HINT:  To initiate replication, you must manually create the replication slot, e
 ALTER SUBSCRIPTION regress_testsub3 ENABLE;
 ERROR:  cannot enable subscription that does not have a slot name
 ALTER SUBSCRIPTION regress_testsub3 REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION is not allowed for disabled subscriptions
 -- fail - origin must be either none or any
 CREATE SUBSCRIPTION regress_testsub4 CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (slot_name = NONE, connect = false, origin = foo);
 ERROR:  unrecognized origin value: "foo"
@@ -352,7 +352,7 @@ ERROR:  ALTER SUBSCRIPTION with refresh cannot run inside a transaction block
 END;
 BEGIN;
 ALTER SUBSCRIPTION regress_testsub REFRESH PUBLICATION;
-ERROR:  ALTER SUBSCRIPTION ... REFRESH cannot run inside a transaction block
+ERROR:  ALTER SUBSCRIPTION ... REFRESH PUBLICATION cannot run inside a transaction block
 END;
 CREATE FUNCTION func() RETURNS VOID AS
 $$ ALTER SUBSCRIPTION regress_testsub SET PUBLICATION mypub WITH (refresh = true) $$ LANGUAGE SQL;
-- 
2.51.0.windows.1

Attachment: v20251014-0002-Introduce-REFRESH-SEQUENCES-for-subscripti.patch (application/octet-stream)
From 1fbc04d4771b38385dd021c5f2ba6e1acfe0e257 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 9 Oct 2025 09:45:16 +0530
Subject: [PATCH v20251014 2/2] Introduce "REFRESH SEQUENCES" for subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command sets the sequence entries present in the
pg_subscription_rel catalog table to the DATASYNC state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
- Add newly published sequences that are not yet part of the subscription.
- Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                  |   2 +-
 src/backend/catalog/pg_subscription.c       |  61 ++-
 src/backend/commands/subscriptioncmds.c     | 402 ++++++++++++++------
 src/backend/executor/execReplication.c      |  27 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/relation.c  |   1 +
 src/backend/replication/logical/tablesync.c |   7 +-
 src/backend/replication/logical/worker.c    |   2 +
 src/backend/replication/pgoutput/pgoutput.c |   6 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |   4 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/tools/pgindent/typedefs.list            |   1 +
 14 files changed, 388 insertions(+), 149 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..1bfd8cae5ae 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8205,7 +8205,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
+   This catalog only contains tables and sequences known to the subscription after running
    either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
    <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
    PUBLICATION</command></link>.
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..78b03f0572b 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionRelations(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionRelations(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionRelations(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index ada9c7eb804..2583534b64f 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -27,6 +27,7 @@
 #include "catalog/objectaddress.h"
 #include "catalog/pg_authid_d.h"
 #include "catalog/pg_database_d.h"
+#include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription.h"
 #include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
@@ -106,12 +107,18 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+typedef struct PublicationRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+}			PublicationRelKind;
+
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
 									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+									  char *subname, bool only_sequences);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,6 +743,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
@@ -747,9 +760,6 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +774,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *pubrels;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+									  NULL, 0, stmt->subname, false);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +790,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			pubrels = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = pubrelinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +819,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +847,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +901,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +914,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +937,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -940,33 +962,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 		check_publications_origin(wrconn, sub->publications, copy_data,
 								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
-
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+								  subrel_local_oids, subrel_count, sub->name,
+								  false);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = pubrelinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
@@ -978,28 +998,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1042,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation kind is not a sequence.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1099,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,7 +1116,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
@@ -1097,6 +1132,58 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with DATASYNC state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("subscription \"%s\" could not connect to the publisher: %s",
+					   sub->name, err));
+
+	PG_TRY();
+	{
+		check_publications_origin(wrconn, sub->publications, false,
+								  sub->retaindeadtuples, sub->origin, NULL, 0,
+								  sub->name, true);
+
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_DATASYNC,
+									   InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to DATASYNC state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1733,6 +1820,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("%s is not allowed for disabled subscriptions",
+								   "ALTER SUBSCRIPTION ... REFRESH SEQUENCES"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1826,7 +1926,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 			check_publications_origin(wrconn, sub->publications, false,
 									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+									  sub->name, false);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2008,7 +2108,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2318,17 +2418,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
 }
 
 /*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other publishers.
+ * This check is required in the following scenarios:
  *
  * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of such tables
+ *      is in subrel_local_oids.
  *
  * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2438,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
  *      details for reliable conflict detection.
+ *    - This check applies to tables only.
  *    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCES statements with "copy_data =
+ *    true" and "origin = none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
  */
 static void
 check_publications_origin(WalReceiverConn *wrconn, List *publications,
 						  bool copydata, bool retain_dead_tuples,
 						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+						  int subrel_count, char *subname, bool only_sequences)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2353,9 +2459,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	List	   *publist = NIL;
 	int			i;
 	bool		check_rdt;
-	bool		check_table_sync;
+	bool		check_sync;
 	bool		origin_none = origin &&
 		pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) == 0;
+	const char *query;
 
 	/*
 	 * Enable retain_dead_tuples checks only when origin is set to 'any',
@@ -2365,28 +2472,42 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	check_rdt = retain_dead_tuples && !origin_none;
 
 	/*
-	 * Enable table synchronization checks only when origin is 'none', to
-	 * ensure that data from other origins is not inadvertently copied.
+	 * Enable table and sequence synchronization checks only when origin is
+	 * 'none', to ensure that data from other origins is not inadvertently
+	 * copied.
 	 */
-	check_table_sync = copydata && origin_none;
+	check_sync = copydata && origin_none;
 
-	/* retain_dead_tuples and table sync checks occur separately */
-	Assert(!(check_rdt && check_table_sync));
+	/* retain_dead_tuples and data synchronization checks occur separately */
+	Assert(!(check_rdt && check_sync));
 
 	/* Return if no checks are required */
-	if (!check_rdt && !check_table_sync)
+	if (!check_rdt && !check_sync)
 		return;
 
 	initStringInfo(&cmd);
-	appendStringInfoString(&cmd,
-						   "SELECT DISTINCT P.pubname AS pubname\n"
-						   "FROM pg_publication P,\n"
-						   "     LATERAL pg_get_publication_tables(P.pubname) GPT\n"
-						   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
-						   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
-						   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
-						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
-						   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+
+	query = "SELECT DISTINCT P.pubname AS pubname\n"
+			"FROM pg_publication P,\n"
+			"     LATERAL %s GPR\n"
+			"     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid OR"
+			"     (GPR.istable AND"
+			"      GPR.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+			"                    SELECT relid FROM pg_partition_tree(PS.srrelid)))),\n"
+			"     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+			"WHERE C.oid = GPR.relid AND P.pubname IN (";
+
+	if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname))");
+	else if (only_sequences)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+	else
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname) UNION ALL"
+						 " SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+
 	GetPublicationsStr(publications, &cmd, true);
 	appendStringInfoString(&cmd, ")\n");
 
@@ -2399,7 +2520,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	 * existing tables may now include changes from other origins due to newly
 	 * created subscriptions on the publisher.
 	 */
-	if (check_table_sync)
+	if (check_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
@@ -2455,11 +2576,11 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		/* Prepare the list of publication(s) for warning message. */
 		GetPublicationsStr(publist, pubnames, false);
 
-		if (check_table_sync)
+		if (check_sync || only_sequences)
 		{
 			appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
 							 subname);
-			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher relations did not come from other origins."));
 		}
 		else
 		{
@@ -2471,8 +2592,8 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				errmsg_internal("%s", err_msg->data),
-				errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
-								 "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains relations that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain relations that are written to by other subscriptions.",
 								 list_length(publist), pubnames->data),
 				errhint_internal("%s", err_hint->data));
 	}
@@ -2594,8 +2715,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(PublicationRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2603,15 +2739,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		check_relkind = (server_version >= 190000);
+	int			column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2619,7 +2757,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2637,14 +2775,26 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+		if (check_relkind)
+			appendStringInfo(&cmd, ", c.relkind\n");
+
+		appendStringInfo(&cmd, "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19 onwards, sequences can also be included in the result */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2662,7 +2812,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2678,22 +2828,32 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		PublicationRelKind *relinfo = palloc_object(PublicationRelKind);
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (check_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE &&
+			check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2701,7 +2861,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..3f61714ea7f 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1112,18 +1112,35 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
 
 
 /*
- * Check if we support writing into specific relkind.
+ * Check if we support writing into the specific relkind of the local relation
+ * and whether it aligns with the relkind of the relation on the publisher.
  *
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
-						 const char *relname)
+CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+						 const char *nspname, const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (localrelkind != RELKIND_RELATION &&
+		localrelkind != RELKIND_PARTITIONED_TABLE &&
+		localrelkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
-				 errdetail_relkind_not_supported(relkind)));
+				 errdetail_relkind_not_supported(localrelkind)));
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((localrelkind == RELKIND_SEQUENCE && remoterelkind != RELKIND_SEQUENCE) ||
+		(localrelkind != RELKIND_SEQUENCE && remoterelkind == RELKIND_SEQUENCE))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   remoterelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   localrelkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..37b799f0604 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -425,6 +425,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 
 		/* Check for supported relkind. */
 		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+								 remoterel->relkind,
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..d0284c8edd7 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -923,7 +923,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
@@ -1633,8 +1633,9 @@ FetchTableStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..88bef030ff4 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3367,6 +3367,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
 	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+							 relmapentry->remoterel.relkind,
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3563,6 +3564,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 
 					/* Check that new partition also has supported relkind. */
 					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+											 relmapentry->remoterel.relkind,
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 847806b0a2e..05cc7512520 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,9 +1137,9 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
-	 * need to check all the given publication-table mappings and report an
-	 * error if any publications have a different column list.
+	 * fetch_relation_list. But one can later change the publication so we
+	 * still need to check all the given publication-table mappings and report
+	 * an error if any publications have a different column list.
 	 */
 	foreach(lc, publications)
 	{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 64bfd309c9a..5e80f40e524 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2318,11 +2318,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..0eb20e57260 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -22,6 +22,7 @@
 #include "catalog/genbki.h"
 #include "catalog/pg_subscription_rel_d.h"	/* IWYU pragma: export */
 #include "nodes/pg_list.h"
+#include "nodes/primnodes.h"
 
 /* ----------------
  *		pg_subscription_rel definition. cpp turns this into
@@ -90,7 +91,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionRelations(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..0ba86c2ad72 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 5290b91e83e..b7c35372b48 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2905,6 +2905,7 @@ SubscriptingRef
 SubscriptingRefState
 Subscription
 SubscriptionInfo
+SubscriptionRelKind
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
 SupportRequestCost
-- 
2.51.0.windows.1
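
To illustrate the commands added by the 0002 patch above, here is a
minimal usage sketch on the subscriber side (the subscription name
"mysub" is only a placeholder, and a publication that includes
sequences is assumed to already exist on the publisher):

-- pick up newly published sequences (and tables) and drop removed ones
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

-- mark all sequences of the subscription as DATASYNC so that they are
-- re-synchronized from the publisher
ALTER SUBSCRIPTION mysub REFRESH SEQUENCES;

Note that, as per the patch, REFRESH SEQUENCES is rejected for disabled
subscriptions and does not take a WITH (...) option list.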

Attachment: v20251014-0003-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From d2aada140298c0921039cdc0c0eac79c25204cdc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Sat, 11 Oct 2025 12:16:04 +0530
Subject: [PATCH v20251014 3/5] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 191 +++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 197 ++----------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   2 +-
 src/bin/pg_dump/pg_dump.c                     |   4 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 12 files changed, 243 insertions(+), 199 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 78b03f0572b..94156513ddf 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -508,13 +508,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 33b7ec7f029..d27f6274188 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -970,7 +970,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..45b6d429558
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,191 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(table_states_not_ready);
+		table_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			table_states_not_ready = lappend(table_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * table_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (table_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index d0284c8edd7..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *table_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within FinishSyncWorker(). Even if
+		 * we fail to remove the origin here, the apply worker will clean it
+		 * up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,7 +379,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,78 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
-										   true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1756,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1774,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1790,21 +1635,21 @@ AllTablesyncsReady(void)
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 88bef030ff4..af810db45c5 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4135,7 +4135,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4625,7 +4625,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4639,7 +4639,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5878,7 +5878,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateRelationStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..bfd051cf198 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,7 +244,7 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
 
 	free(inhinfo);				/* not needed any longer */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 641bece12c7..931fd46bbe3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5306,7 +5306,7 @@ getSubscriptions(Archive *fout)
 
 /*
  * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
@@ -5364,7 +5364,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 0eb20e57260..1d18b3e5635 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,7 +90,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
 									  bool get_sequences, bool not_ready);
 
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..43d3a835cb2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *table_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b7c35372b48..6b1189adeb1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2923,7 +2923,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.51.0.windows.1

Attachment: v20251014-0004-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 034b3a69abbdebb128fa06a46acaf7b2fbab2ca5 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Tue, 14 Oct 2025 16:03:54 +0800
Subject: [PATCH v20251014 4/5] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state using pg_sequence_state().
    b) Reports an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; synchronized sequences are committed in batches of 100 (see the example query below).
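
For illustration (the subscription name "sub1" is hypothetical), the
per-sequence state can be inspected on the subscriber; the srsubstate values
'i', 'd', and 'r' correspond to INIT, DATASYNC, and READY:

    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S'
      AND sr.srsubid = (SELECT oid FROM pg_subscription
                        WHERE subname = 'sub1');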

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case
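
For illustration, a typical flow involving these commands could look like the
following sketch (subscription, connection, and publication names are
hypothetical):

    -- initial synchronization of all published sequences
    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION pub1;

    -- adds newly published sequences (and tables) in INIT state
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- re-synchronizes all sequences already known to the subscription
    -- (the new command introduced by this patch)
    ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;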

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  60 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 746 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 129 ++-
 src/backend/replication/logical/tablesync.c   | 102 +--
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 238 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1388 insertions(+), 180 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 94156513ddf..0a9ab03ca87 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index ff93515ca9b..d4c153a093d 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1057,7 +1057,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2085,7 +2085,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..99e6f566459 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,18 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
- *
- * We are only interested in the leader apply worker or table sync worker.
+ * subscription id, relid and type.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +267,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +328,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +417,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +507,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +639,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +712,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +844,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the sequencesync worker's last start time (last_seqsync_start_time),
+ * which is tracked in the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +911,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1288,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1625,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1665,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..870b172e52d
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,746 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences for which there are insufficient privileges, as
+ * well as sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by batch_size rather than by the number
+		 * of fetched rows because some sequences may be missing, so the number
+		 * of fetched rows may not match the batch size. The hash_search with
+		 * HASH_REMOVE takes care of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on publisher removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes with XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Unlike table synchronization, there is no replication slot to manage here.
+ * Note that we don't handle FATAL errors which are probably because of system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 45b6d429558..4a3af2a8fca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -50,8 +50,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -66,14 +68,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -89,7 +103,48 @@ InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -97,6 +152,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 	switch (MyLogicalRepWorker->type)
 	{
 		case WORKERTYPE_PARALLEL_APPLY:
+
 			/*
 			 * Skip for parallel apply workers because they only operate on
 			 * tables that are in a READY state. See pa_can_start() and
@@ -109,7 +165,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 			break;
 
 		case WORKERTYPE_APPLY:
-			ProcessSyncingTablesForApply(current_lsn);
+			{
+				bool		has_pending_sequences = false;
+
+				/*
+				 * We need up-to-date sync state info for subscription tables
+				 * and sequences here.
+				 */
+				FetchRelationStates(&has_pending_sequences);
+				ProcessSyncingTablesForApply(current_lsn);
+				if (has_pending_sequences)
+					ProcessSyncingSequencesForApply();
+
+				break;
+			}
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -119,19 +192,25 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
-
-	*started_tx = false;
+	static bool has_subsequences_non_ready = false;
+	bool		started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
 	{
@@ -141,6 +220,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,7 +229,7 @@ FetchRelationStates(bool *started_tx)
 		if (!IsTransactionState())
 		{
 			StartTransactionCommand();
-			*started_tx = true;
+			started_tx = true;
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
@@ -162,7 +242,11 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready, rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -187,5 +271,14 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..bc0f7988a43 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 	Assert(!IsTransactionState());
 
-	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
-
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
@@ -413,6 +410,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +431,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +549,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
-
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1249,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1524,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1570,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1578,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1615,23 +1592,16 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
+	has_tables = FetchRelationStates(NULL);
 
 	/*
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1645,19 +1615,7 @@ AllTablesyncsReady(void)
 bool
 HasSubscriptionTablesCached(void)
 {
-	bool		started_tx;
-	bool		has_subrels;
-
-	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
-
-	if (started_tx)
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	return has_subrels;
+	return FetchRelationStates(NULL);
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index af810db45c5..8591ef144f9 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, false, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4134,7 +4157,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any tables that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5577,7 +5603,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5697,8 +5724,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5809,6 +5836,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5828,14 +5859,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5906,6 +5939,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5918,9 +5955,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index b176d5130e4..42c118167ee 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1914,7 +1914,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b51d2b17379..8a2e1d1158a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 1d18b3e5635..5ec85507f56 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -83,6 +83,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 43d3a835cb2..252a4228d5b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateRelationStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence start value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..ad96e616c02
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,238 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should resynchronize the existing
+# sequences, but newly published sequences should not be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6b1189adeb1..efe726af36c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.51.0.windows.1

#384Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#380)
RE: Logical Replication of sequences

On Monday, October 13, 2025 3:28 PM shveta malik <shveta.malik@gmail.com> wrote:

Please find few initial comments for 002:

Thanks for the comments.

5)
CheckSubscriptionRelkind
+ if (pubrelkind == '\0')
+ return;

Do you think, we shall write a comment in the function header that the caller
who wants to verify only the supported type should pass pubrelkind as '\0'?

This check is removed due to some other changes.

All other comments have been addressed in the latest version.

Best Regards,
Hou zj

#385Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#379)
RE: Logical Replication of sequences

On Monday, October 13, 2025 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sat, Oct 11, 2025 at 7:42 PM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

5.
AlterSubscription_refresh()
- sub_remove_rels[remove_rel_len].relid = relid;
- sub_remove_rels[remove_rel_len++].state = state;
…
- char originname[NAMEDATALEN];
+ SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+ rel->relid = relid;
+ rel->state = state;
+
+ sub_remove_rels = lappend(sub_remove_rels, rel);

Why do we change an offset based array into list? It looks slightly
odd that in the same function one of the other similar array
pubrel_local_oids is not converted when the above is converted. And
even if we do so, I don't think we need a retail free
(list_free_deep(sub_remove_rels);) as the memory allocation here is in
portal context which should be reset by end of the current statement
execution.

When sub_remove_rels was an array, we used subrel_count as its initial size,
but that count covers both tables and sequences, so the array allocated some
unnecessary space. That is why we changed it to a List.

Based on the above, I kept the current style for now but removed the list_free.

All other comments have been addressed in the latest version.

Best Regards,
Hou zj

#386Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Peter Smith (#378)
RE: Logical Replication of sequences

On Monday, October 13, 2025 8:59 AM Peter Smith <smithpb2250@gmail.com> wrote:

HI Vignesh,

Here are some minor review comments for patches 0001 and 0002.

Thanks for the comments. I have addressed them in the latest version.

Best Regards,
Hou zj

#387Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#383)
Re: Logical Replication of sequences

On Tue, Oct 14, 2025 at 3:33 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

0003~0005:
Unchanged.

TODO:
* The latest comment from Shveta[5].
* The comment from Amit[6] to avoid creating slot/origin for sequence only subscription.

A few comments on 0003 and 0004, based on the previous version of the patch.
As those patches are unchanged, I assume the comments apply to the new version
as well.

1. invalidate_syncing_table_states is changed to InvalidateRelationStates. We
could still retain 'syncing' in the name and call it
InvalidateSyncingRelationStates.

2.
- /* Process any tables that are being synchronized in parallel. */
+ /*
+ * Process any tables that are being synchronized in parallel and any
+ * newly added relations.
+ */
  ProcessSyncingRelations(commit_data.end_lsn);

pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)

in_remote_transaction = false;

- /* Process any tables that are being synchronized in parallel. */
+ /*
+ * Process any tables that are being synchronized in parallel and any
+ * newly added relations.
+ */
  ProcessSyncingRelations(prepare_data.end_lsn);

The first line of the comment mentions tables, while the second line mentions
relations. I think this call can process sequences as well, if any are added.
I wonder whether this (while applying prepare/commit) is the right time to
invoke it for sequences. The apply worker does need to invoke the sequencesync
worker when required, but I am not sure this is the right place.

3.
@@ -378,9 +378,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)

Assert(!IsTransactionState());

- /* We need up-to-date sync state info for subscription tables here. */
- FetchRelationStates(&started_tx);

I think it is better to let FetchRelationStates be invoked from here
as it sets the context of further work and makes it easy to understand
the code flow. We can even do the same for
ProcessSyncingSequencesForApply().
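
For example, a rough sketch of the placement I have in mind (using the new
FetchRelationStates() signature from this patch; this is only an illustration,
not the exact code):

void
ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
{
	Assert(!IsTransactionState());

	/* We need up-to-date sync state info for subscription tables here. */
	FetchRelationStates(NULL);
	...
}

and a similar FetchRelationStates() call at the start of
ProcessSyncingSequencesForApply().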

4.
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel,
Oid localidxoid,
*/
LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
- InvalidOid, false);
+ InvalidOid, false, false);


extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+ LogicalRepWorkerType wtype,
bool only_running);

The third parameter is LogicalRepWorkerType and passing it false in
the above usage doesn't make sense. Also, we should update comments
atop logicalrep_worker_find as to why worker_type is required. I want
to know why subid+relid combination is not sufficient to identify the
workers uniquely.
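
For instance, the patch already passes an explicit type for the same lookup in
wait_for_worker_state_change(), so presumably the above call should look
something like (assuming WORKERTYPE_APPLY is the intended type for the leader
here):

		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
										InvalidOid, WORKERTYPE_APPLY, false);

But even then, the question of why the worker type is needed at all remains.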

--
With Regards,
Amit Kapila.

#388Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#387)
Re: Logical Replication of sequences

On Tue, Oct 14, 2025 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

0001 and 0002 look good, except for the duplicate version-checking code in
fetch_relation_list, quoted at [1] and [2] below. The check_relkind flag and
the fetching of sequences are related changes that both start from version 19,
so we can do a single check. Instead of the 'check_relkind' variable name we
can change it to 'support_relkind_seq' or something like that and then use it
in both places; a rough sketch of what I mean follows the quoted code.

[1]
+ bool check_relkind = (server_version >= 190000);
+ int column_count = check_columnlist ? (check_relkind ? 4 : 3) : 2;
[2]
+ /* From version 19, inclusion of sequences in the target is supported */
+ if (server_version >= 190000)
+ appendStringInfo(&cmd,
+ "UNION ALL\n"
+ "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS
attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+ "  FROM pg_catalog.pg_publication_sequences s\n"
+ "  WHERE s.pubname IN (%s)",
+ pub_names->data);
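
In other words, something like the sketch below, keeping the rest of
fetch_relation_list as it is in the patch (the name support_relkind_seq
is just a suggestion):

```
	bool		support_relkind_seq = (server_version >= 190000);
	int			column_count = check_columnlist ? (support_relkind_seq ? 4 : 3) : 2;

	/* ... existing query construction ... */

	/* From version 19, inclusion of sequences in the target is supported */
	if (support_relkind_seq)
		appendStringInfo(&cmd,
						 "UNION ALL\n"
						 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
						 "  FROM pg_catalog.pg_publication_sequences s\n"
						 "  WHERE s.pubname IN (%s)",
						 pub_names->data);
```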

--
Regards,
Dilip Kumar
Google

#389Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Zhijie Hou (Fujitsu) (#383)
RE: Logical Replication of sequences

Dear Hou,

Thanks for updating the patch. Here are comments for the recent 0002.
The others are still being reviewed.

01. pg_subscription_rel.h
```
+#include "nodes/primnodes.h"
```

The inclusion does not seem to be needed.

02. typedefs.list
```
+SubscriptionRelKind
```

Missing update.

03. subscriptioncmds.c
```
+#include "catalog/pg_sequence.h"
```

I could build without the inclusion. Can you remove it?

04. check_publications_origin
```
+
+       query = "SELECT DISTINCT P.pubname AS pubname\n"
+                       "FROM pg_publication P,\n"
+                       "     LATERAL %s GPR\n"
...
```

pgindent does not like this notation. How about breaking the line after
the "="? I.e.,

```
query =
"SELECT DISTINCT P.pubname AS pubname\n"
"FROM pg_publication P,\n"
" LATERAL %s GPR\n"
...
```

05. AddSubscriptionRelState

```
if (HeapTupleIsValid(tup))
elog(ERROR, "subscription table %u in subscription %u already exists",
relid, subid);
```

Theoretically the relid might refer to a sequence, right? Should we say
"relation" instead of "table" here as well?

06. AlterSubscription_refresh_seq
```
+ /* Get local relation list. */
```

In contrast, can we say "sequence" here?

07. check_publications_origin
```
if (res->status != WALRCV_OK_TUPLES)
ereport(ERROR,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not receive list of replicated tables from the publisher: %s",
res->err)));
```

Should we say "relations" instead of "tables"? Similar lines are:

```
/* Process tables. */
...
* Log a warning if the publisher has subscribed to the same table from
```

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#390shveta malik
shveta.malik@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#389)
Re: Logical Replication of sequences

Please find a few more comments on 002:

1)
-   This catalog only contains tables known to the subscription after running
+   This catalog only contains tables and sequences known to the
subscription after running

Shall we get rid of 'only' now?

2)
+ * A single sequencesync worker synchronizes all sequences, so
+ * only stop workers when relation kind is not sequence.

This comment refers to the sequencesync worker, which is only introduced in later patches. Is that okay?

3)
UpdateSubscriptionRelState

if (!HeapTupleIsValid(tup))
elog(ERROR, "subscription table %u in subscription %u
does not exist",
relid, subid);

table --> relation, as AlterSubscription_refresh_seq() also invokes this.

4)
check_publications_origin :

if (res->status != WALRCV_OK_TUPLES)
ereport(ERROR,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not receive list of
replicated tables from the publisher: %s",
res->err)));

It could be sequences as well.
We can either change it to 'replicated tables and/or sequences' or
simply 'replicated relations'.

5)
fetch_relation_list:
Same here:
if (res->status != WALRCV_OK_TUPLES)
ereport(ERROR,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not receive list of
replicated tables from the publisher: %s",
res->err)));

6)
CreateSubscription:
/*
* Connect to remote side to execute requested commands and fetch table
* info.
*/

We can update this existing comment to mention sequences as well.

thanks
Shveta

#391shveta malik
shveta.malik@gmail.com
In reply to: shveta malik (#390)
Re: Logical Replication of sequences

Please find a few comments on 003:

1)
+#include "replication/logicallauncher.h"
+#include "replication/origin.h"
+#include "replication/slot.h"

syncutils.c compiles without these 3 inclusions.

2)
Should 'table_states_not_ready' be changed to
'relation_states_not_ready', since it now handles both tables and
sequences?

3)
invalidate_syncing_table_states has been changed to
InvalidateRelationStates. Shall we keep it as
InvalidateSyncingRelStates()?

4)
getSubscriptionTables:

- *   Get information about subscription membership for dumpable tables. This
+ *   Get information about subscription membership for dumpable relations. This

Is there a reason that we have changed the comment but not the function name?

thanks
Shveta

#392Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#388)
Re: Logical Replication of sequences

On Tue, Oct 14, 2025 at 5:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 14, 2025 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

0001 and 0002 looks good,

Thanks, I pushed 0001. I feel it is better to commit the refactoring
patch v20251014-0003-Reorganize-tablesync-Code-and-Introduce-sy next, as
that would be less controversial. What do you think?

except this duplicate version checking code below in fetch_relation_list [1][2], I mean check_relkind and sequence fetching both are related changes and start from version 19, so we can do a single check. Instead of the 'check_relkind' variable name we can change it to 'support_relkind_seq' or something like that and then we can use this in both checks.

Sounds reasonable to me.

--
With Regards,
Amit Kapila.

#393Chao Li
li.evan.chao@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#383)
Re: Logical Replication of sequences

I only reviewed 0003, as I saw Amit mentioned that 0003 should be next. Overall LGTM, I just have one comment:

In common.c:
```
-	pg_log_info("reading subscription membership of tables");
+	pg_log_info("reading subscription membership of relations");
 	getSubscriptionTables(fout);
```

0003 replaces "table" with "relation" everywhere; I think that's because sequences will be involved. In this case, why is the comment updated but the function name left unchanged? Looking at the function comment of getSubscriptionTables():

/*
* getSubscriptionTables
* Get information about subscription membership for dumpable relations. This
* will be used only in binary-upgrade mode for PG17 or later versions.
*/
void
getSubscriptionTables(Archive *fout)

It also mentions "dumpable relations". Should we update the function name to use "relation" as well?

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#394Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#392)
2 attachment(s)
RE: Logical Replication of sequences

On Wednesday, October 15, 2025 3:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 14, 2025 at 5:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Tue, Oct 14, 2025 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com>

wrote:

0001 and 0002 looks good,

Thanks, I pushed 0001. I feel it is better to next commit refactoring patch
v20251014-0003-Reorganize-tablesync-Code-and-Introduce-sy as that would
be less controversial. What do you think?

I agree and have reordered the latest patch set.

except this duplicate version checking code below in fetch_relation_list

[1][2], I mean check_relkind and sequence fetching both are related changes
and start from version 19, so we can do a single check. Instead of the
'check_relkind' variable name we can change it to 'support_relkind_seq' or
something like that and then we can use this in both checks.

Sounds reasonable to me.

Changed as suggested.

Here is the new version patch set which includes the following changes:

0001:
* Addressed comments from Shveta[1] and Chao li[2].

0002:
* Addressed Shveta's[3], Kuroda-San's[4] and Dilip's comments[5].

TODO:
* The comments on the 0003 patch by Amit[6].
* The comment from Amit[7] to avoid creating a slot/origin for sequence-only subscriptions.

[1]: /messages/by-id/CAJpy0uCSKChetEw1buBrZu3vAV8OYv3X9MygNxKHw5WWzMd1Gg@mail.gmail.com
[2]: /messages/by-id/158C2EDB-D505-46A6-996D-296EC1B3ACE2@gmail.com
[3]: /messages/by-id/CAJpy0uCPuTLEkuC7kbXwvyUuxrtKOhDb9A3ti6EhOKgzMNkbcQ@mail.gmail.com
[4]: /messages/by-id/OSCPR01MB14966B8BB27B784674506C9A8F5EBA@OSCPR01MB14966.jpnprd01.prod.outlook.com
[5]: /messages/by-id/CAA4eK1+Hb4H9z8C5kiuc42=w=Pi9dQAioJW=2OSr9eAnpoxF6w@mail.gmail.com
[6]: /messages/by-id/CAA4eK1J8UYFPgcM5b0aHvbRT_3pVNUpnpvypQU5vqk4Uu=mXVg@mail.gmail.com
[7]: /messages/by-id/CAA4eK1J=gc8WXVc2Hy0Xcq4KtWU-z-dxBiZHbT62jz3QPBZ5CQ@mail.gmail.com

Best Regards,
Hou zj

Attachments:

v20251015-0002-Introduce-REFRESH-SEQUENCES-for-subscripti.patch (application/octet-stream)
From e3b71d721bfbadccbe6ffb221f4f79f1ca817a3c Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 16:57:15 +0800
Subject: [PATCH v20251015 2/4] Introduce "REFRESH SEQUENCES" for subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                  |   2 +-
 src/backend/catalog/pg_subscription.c       |  63 ++-
 src/backend/commands/subscriptioncmds.c     | 410 ++++++++++++++------
 src/backend/executor/execReplication.c      |  27 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/relation.c  |   1 +
 src/backend/replication/logical/syncutils.c |   3 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/logical/worker.c    |   2 +
 src/backend/replication/pgoutput/pgoutput.c |   6 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |   3 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/tools/pgindent/typedefs.list            |   1 +
 15 files changed, 391 insertions(+), 153 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..1bfd8cae5ae 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8205,7 +8205,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
+   This catalog contains tables and sequences known to the subscription after running
    either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
    <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
    PUBLICATION</command></link>.
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..c615005c923 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -284,7 +284,7 @@ AddSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u already exists",
+		elog(ERROR, "subscription relation %u in subscription %u already exists",
 			 relid, subid);
 
 	/* Form the tuple. */
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 0f54686b699..5724080e0a8 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -106,12 +106,18 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+typedef struct PublicationRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} PublicationRelKind;
+
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
 									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+									  char *subname, bool only_sequences);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,20 +742,23 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * XXX: Currently, a replication origin is created for all subscriptions,
+	 * including those for sequence-only publications. However, this is
+	 * unnecessary, as incremental synchronization of sequences is not
+	 * supported.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
 	/*
 	 * Connect to remote side to execute requested commands and fetch table
-	 * info.
+	 * and sequence info.
 	 */
 	if (opts.connect)
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +773,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *pubrels;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+									  NULL, 0, stmt->subname, false);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +789,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			pubrels = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = pubrelinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +818,11 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Currently, a replication slot is created for all
+			 * subscriptions, including those for sequence-only publications.
+			 * However, this is unnecessary, as incremental synchronization of
+			 * sequences is not supported.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +846,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +900,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +913,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +936,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relation in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -940,33 +961,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 		check_publications_origin(wrconn, sub->publications, copy_data,
 								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
+								  subrel_local_oids, subrel_count, sub->name,
+								  false);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = pubrelinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
@@ -978,28 +997,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1041,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * XXX Currently there is no sequencesync worker, so we only
+				 * stop tablesync workers.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1098,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,7 +1115,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
@@ -1097,6 +1131,58 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with INIT state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("subscription \"%s\" could not connect to the publisher: %s",
+					   sub->name, err));
+
+	PG_TRY();
+	{
+		check_publications_origin(wrconn, sub->publications, false,
+								  sub->retaindeadtuples, sub->origin, NULL, 0,
+								  sub->name, true);
+
+		/* Get local sequence list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+									   InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1733,6 +1819,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("%s is not allowed for disabled subscriptions",
+								   "ALTER SUBSCRIPTION ... REFRESH SEQUENCES"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1826,7 +1925,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 			check_publications_origin(wrconn, sub->publications, false,
 									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+									  sub->name, false);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2008,7 +2107,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2318,17 +2417,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
 }
 
 /*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other publishers.
+ * This check is required in the following scenarios:
  *
  * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of such tables
+ *      is in subrel_local_oids.
  *
  * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2437,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
  *      details for reliable conflict detection.
+ *    - This check targets for tables only.
  *    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "copy_data =
+ *    true" and "origin = none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
  */
 static void
 check_publications_origin(WalReceiverConn *wrconn, List *publications,
 						  bool copydata, bool retain_dead_tuples,
 						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+						  int subrel_count, char *subname, bool only_sequences)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2353,9 +2458,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	List	   *publist = NIL;
 	int			i;
 	bool		check_rdt;
-	bool		check_table_sync;
+	bool		check_sync;
 	bool		origin_none = origin &&
 		pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) == 0;
+	const char *query;
 
 	/*
 	 * Enable retain_dead_tuples checks only when origin is set to 'any',
@@ -2365,28 +2471,42 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	check_rdt = retain_dead_tuples && !origin_none;
 
 	/*
-	 * Enable table synchronization checks only when origin is 'none', to
-	 * ensure that data from other origins is not inadvertently copied.
+	 * Enable table and sequence synchronization checks only when origin is
+	 * 'none', to ensure that data from other origins is not inadvertently
+	 * copied.
 	 */
-	check_table_sync = copydata && origin_none;
+	check_sync = copydata && origin_none;
 
-	/* retain_dead_tuples and table sync checks occur separately */
-	Assert(!(check_rdt && check_table_sync));
+	/* retain_dead_tuples and data synchronization checks occur separately */
+	Assert(!(check_rdt && check_sync));
 
 	/* Return if no checks are required */
-	if (!check_rdt && !check_table_sync)
+	if (!check_rdt && !check_sync)
 		return;
 
 	initStringInfo(&cmd);
-	appendStringInfoString(&cmd,
-						   "SELECT DISTINCT P.pubname AS pubname\n"
-						   "FROM pg_publication P,\n"
-						   "     LATERAL pg_get_publication_tables(P.pubname) GPT\n"
-						   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
-						   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
-						   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
-						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
-						   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+
+	query = "SELECT DISTINCT P.pubname AS pubname\n"
+		"FROM pg_publication P,\n"
+		"     LATERAL %s GPR\n"
+		"     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid OR"
+		"     (GPR.istable AND"
+		"      GPR.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+		"                    SELECT relid FROM pg_partition_tree(PS.srrelid)))),\n"
+		"     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+		"WHERE C.oid = GPR.relid AND P.pubname IN (";
+
+	if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname))");
+	else if (only_sequences)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+	else
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname) UNION ALL"
+						 " SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+
 	GetPublicationsStr(publications, &cmd, true);
 	appendStringInfoString(&cmd, ")\n");
 
@@ -2399,7 +2519,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	 * existing tables may now include changes from other origins due to newly
 	 * created subscriptions on the publisher.
 	 */
-	if (check_table_sync)
+	if (check_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
@@ -2418,10 +2538,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("could not receive list of replicated tables from the publisher: %s",
+				 errmsg("could not receive list of replicated relations from the publisher: %s",
 						res->err)));
 
-	/* Process tables. */
+	/* Process relations. */
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 	{
@@ -2436,7 +2556,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	}
 
 	/*
-	 * Log a warning if the publisher has subscribed to the same table from
+	 * Log a warning if the publisher has subscribed to the same relation from
 	 * some other publisher. We cannot know the origin of data during the
 	 * initial sync. Data origins can be found only from the WAL by looking at
 	 * the origin id.
@@ -2455,11 +2575,11 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		/* Prepare the list of publication(s) for warning message. */
 		GetPublicationsStr(publist, pubnames, false);
 
-		if (check_table_sync)
+		if (check_sync || only_sequences)
 		{
 			appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
 							 subname);
-			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher relations did not come from other origins."));
 		}
 		else
 		{
@@ -2471,8 +2591,8 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				errmsg_internal("%s", err_msg->data),
-				errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
-								 "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains relations that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain relations that are written to by other subscriptions.",
 								 list_length(publist), pubnames->data),
 				errhint_internal("%s", err_hint->data));
 	}
@@ -2594,8 +2714,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(PublicationRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2603,15 +2738,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		support_relkind_seq = (server_version >= 190000);
+	int			column_count = check_columnlist ? (support_relkind_seq ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2619,7 +2756,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2637,14 +2774,27 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+
+		if (support_relkind_seq)
+			appendStringInfo(&cmd, ", c.relkind\n");
+
+		appendStringInfo(&cmd, "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, inclusion of sequences in the target is supported */
+		if (support_relkind_seq)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2662,7 +2812,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2678,22 +2828,32 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		PublicationRelKind *relinfo = palloc_object(PublicationRelKind);
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (support_relkind_seq)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE &&
+			check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2701,7 +2861,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..3f61714ea7f 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1112,18 +1112,35 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
 
 
 /*
- * Check if we support writing into specific relkind.
+ * Check if we support writing into specific relkind of local relation and check
+ * if it aligns with the relkind of the relation on the publisher.
  *
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
-						 const char *relname)
+CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+						 const char *nspname, const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (localrelkind != RELKIND_RELATION &&
+		localrelkind != RELKIND_PARTITIONED_TABLE &&
+		localrelkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
-				 errdetail_relkind_not_supported(relkind)));
+				 errdetail_relkind_not_supported(localrelkind)));
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((localrelkind == RELKIND_SEQUENCE && remoterelkind != RELKIND_SEQUENCE) ||
+		(localrelkind != RELKIND_SEQUENCE && remoterelkind == RELKIND_SEQUENCE))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   remoterelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   localrelkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..37b799f0604 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -425,6 +425,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 
 		/* Check for supported relkind. */
 		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+								 remoterel->relkind,
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index c30de1275e7..7b59f3eb234 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -151,7 +151,8 @@ FetchRelationStates(bool *started_tx)
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 921395c2409..2f6c45339d6 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3c58ad88476..d986ba2ea50 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3367,6 +3367,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
 	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+							 relmapentry->remoterel.relkind,
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3563,6 +3564,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 
 					/* Check that new partition also has supported relkind. */
 					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+											 relmapentry->remoterel.relkind,
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 847806b0a2e..05cc7512520 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,9 +1137,9 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
-	 * need to check all the given publication-table mappings and report an
-	 * error if any publications have a different column list.
+	 * fetch_relation_list. But one can later change the publication so we
+	 * still need to check all the given publication-table mappings and report
+	 * an error if any publications have a different column list.
 	 */
 	foreach(lc, publications)
 	{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 64bfd309c9a..5e80f40e524 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2318,11 +2318,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..49deec052c6 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,7 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..0ba86c2ad72 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ee1cab6190f..fb6a67d94e0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2362,6 +2362,7 @@ PublicationObjSpec
 PublicationObjSpecType
 PublicationPartOpt
 PublicationRelInfo
+PublicationRelKind
 PublicationSchemaInfo
 PublicationTable
 PublishGencolsType
-- 
2.31.1

v20251015-0001-Reorganize-tablesync-Code-and-Introduce-sy.patch (application/octet-stream)
From 4505b85e6357d2588e7e379724d47599ea4be0c2 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 16:30:23 +0800
Subject: [PATCH v20251015 1/4] Reorganize tablesync Code and Introduce
 syncutils

Reorganized the tablesync code by creating a new syncutils file.
This refactoring will facilitate the development of sequence
synchronization worker code.

This commit separates code reorganization from functional changes,
making it clearer to reviewers that only existing code has been moved.
The changes in this patch can be merged with subsequent patches during
the commit process.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   4 +-
 src/backend/replication/logical/Makefile      |   1 +
 .../replication/logical/applyparallelworker.c |   2 +-
 src/backend/replication/logical/meson.build   |   1 +
 src/backend/replication/logical/syncutils.c   | 188 ++++++++++++++++
 src/backend/replication/logical/tablesync.c   | 204 +++---------------
 src/backend/replication/logical/worker.c      |  22 +-
 src/bin/pg_dump/common.c                      |   4 +-
 src/bin/pg_dump/pg_dump.c                     |   8 +-
 src/bin/pg_dump/pg_dump.h                     |   2 +-
 src/include/catalog/pg_subscription_rel.h     |   2 +-
 src/include/replication/worker_internal.h     |  14 +-
 src/tools/pgindent/typedefs.list              |   2 +-
 13 files changed, 248 insertions(+), 206 deletions(-)
 create mode 100644 src/backend/replication/logical/syncutils.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index b885890de37..e06587b0265 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -506,13 +506,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 }
 
 /*
- * Does the subscription have any relations?
+ * Does the subscription have any tables?
  *
  * Use this function only to know true/false, and when you have no need for the
  * List returned by GetSubscriptionRelations.
  */
 bool
-HasSubscriptionRelations(Oid subid)
+HasSubscriptionTables(Oid subid)
 {
 	Relation	rel;
 	ScanKeyData skey[1];
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index 1e08bbbd4eb..c62c8c67521 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -28,6 +28,7 @@ OBJS = \
 	reorderbuffer.o \
 	slotsync.o \
 	snapbuild.o \
+	syncutils.o \
 	tablesync.o \
 	worker.o
 
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 33b7ec7f029..14325581afc 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -970,7 +970,7 @@ ParallelApplyWorkerMain(Datum main_arg)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateSyncingRelStates,
 								  (Datum) 0);
 
 	set_apply_error_context_origin(originname);
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 6f19614c79d..9283e996ef4 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -14,6 +14,7 @@ backend_sources += files(
   'reorderbuffer.c',
   'slotsync.c',
   'snapbuild.c',
+  'syncutils.c',
   'tablesync.c',
   'worker.c',
 )
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
new file mode 100644
index 00000000000..c30de1275e7
--- /dev/null
+++ b/src/backend/replication/logical/syncutils.c
@@ -0,0 +1,188 @@
+/*-------------------------------------------------------------------------
+ * syncutils.c
+ *	  PostgreSQL logical replication: common synchronization code
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/syncutils.c
+ *
+ * NOTES
+ *	  This file contains code common to table synchronization workers, and
+ *	  the sequence synchronization worker.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "catalog/pg_subscription_rel.h"
+#include "pgstat.h"
+#include "replication/worker_internal.h"
+#include "storage/ipc.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+
+/*
+ * Enum for phases of the subscription relations state.
+ *
+ * SYNC_RELATIONS_STATE_NEEDS_REBUILD indicates that the subscription relations
+ * state is no longer valid, and the subscription relations should be rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_REBUILD_STARTED indicates that the subscription
+ * relations state is being rebuilt.
+ *
+ * SYNC_RELATIONS_STATE_VALID indicates that the subscription relation state is
+ * up-to-date and valid.
+ */
+typedef enum
+{
+	SYNC_RELATIONS_STATE_NEEDS_REBUILD,
+	SYNC_RELATIONS_STATE_REBUILD_STARTED,
+	SYNC_RELATIONS_STATE_VALID,
+} SyncingRelationsState;
+
+static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+
+/*
+ * Exit routine for synchronization worker.
+ */
+pg_noreturn void
+FinishSyncWorker(void)
+{
+	/*
+	 * Commit any outstanding transaction. This is the usual case, unless
+	 * there was nothing to do for the table.
+	 */
+	if (IsTransactionState())
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	/* And flush all writes. */
+	XLogFlush(GetXLogWriteRecPtr());
+
+	StartTransactionCommand();
+	ereport(LOG,
+			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					MySubscription->name,
+					get_rel_name(MyLogicalRepWorker->relid))));
+	CommitTransactionCommand();
+
+	/* Find the leader apply worker and signal it. */
+	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+
+	/* Stop gracefully */
+	proc_exit(0);
+}
+
+/*
+ * Callback from syscache invalidation.
+ */
+void
+InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
+{
+	relation_states_validity = SYNC_RELATIONS_STATE_NEEDS_REBUILD;
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized.
+ */
+void
+ProcessSyncingRelations(XLogRecPtr current_lsn)
+{
+	switch (MyLogicalRepWorker->type)
+	{
+		case WORKERTYPE_PARALLEL_APPLY:
+
+			/*
+			 * Skip for parallel apply workers because they only operate on
+			 * tables that are in a READY state. See pa_can_start() and
+			 * should_apply_changes_for_rel().
+			 */
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			ProcessSyncingTablesForSync(current_lsn);
+			break;
+
+		case WORKERTYPE_APPLY:
+			ProcessSyncingTablesForApply(current_lsn);
+			break;
+
+		case WORKERTYPE_UNKNOWN:
+			/* Should never happen. */
+			elog(ERROR, "Unknown worker type");
+	}
+}
+
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ *
+ * Note: If this function started the transaction (indicated by the parameter)
+ * then it is the caller's responsibility to commit it.
+ */
+bool
+FetchRelationStates(bool *started_tx)
+{
+	static bool has_subtables = false;
+
+	*started_tx = false;
+
+	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
+	{
+		MemoryContext oldctx;
+		List	   *rstates;
+		ListCell   *lc;
+		SubscriptionRelState *rstate;
+
+		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+
+		/* Clean the old lists. */
+		list_free_deep(relation_states_not_ready);
+		relation_states_not_ready = NIL;
+
+		if (!IsTransactionState())
+		{
+			StartTransactionCommand();
+			*started_tx = true;
+		}
+
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+		foreach(lc, rstates)
+		{
+			rstate = palloc(sizeof(SubscriptionRelState));
+			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
+			relation_states_not_ready = lappend(relation_states_not_ready, rstate);
+		}
+		MemoryContextSwitchTo(oldctx);
+
+		/*
+		 * Does the subscription have tables?
+		 *
+		 * If there were not-READY tables found then we know it does. But if
+		 * relation_states_not_ready was empty we still need to check again to
+		 * see if there are 0 tables.
+		 */
+		has_subtables = (relation_states_not_ready != NIL) ||
+			HasSubscriptionTables(MySubscription->oid);
+
+		/*
+		 * If the subscription relation cache has been invalidated since we
+		 * entered this routine, we still use and return the relations we just
+		 * finished constructing, to avoid infinite loops, but we leave the
+		 * table states marked as stale so that we'll rebuild it again on next
+		 * access. Otherwise, we mark the table states as valid.
+		 */
+		if (relation_states_validity == SYNC_RELATIONS_STATE_REBUILD_STARTED)
+			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
+	}
+
+	return has_subtables;
+}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e6da4028d39..921395c2409 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -117,58 +117,15 @@
 #include "utils/array.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
 #include "utils/rls.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-typedef enum
-{
-	SYNC_TABLE_STATE_NEEDS_REBUILD,
-	SYNC_TABLE_STATE_REBUILD_STARTED,
-	SYNC_TABLE_STATE_VALID,
-} SyncingTablesState;
-
-static SyncingTablesState table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-static List *table_states_not_ready = NIL;
-static bool FetchTableStates(bool *started_tx);
+List	   *relation_states_not_ready = NIL;
 
 static StringInfo copybuf = NULL;
 
-/*
- * Exit routine for synchronization worker.
- */
-pg_noreturn static void
-finish_sync_worker(void)
-{
-	/*
-	 * Commit any outstanding transaction. This is the usual case, unless
-	 * there was nothing to do for the table.
-	 */
-	if (IsTransactionState())
-	{
-		CommitTransactionCommand();
-		pgstat_report_stat(true);
-	}
-
-	/* And flush all writes. */
-	XLogFlush(GetXLogWriteRecPtr());
-
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
-
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
-
-	/* Stop gracefully */
-	proc_exit(0);
-}
-
 /*
  * Wait until the relation sync state is set in the catalog to the expected
  * one; return true when it happens.
@@ -180,7 +137,7 @@ finish_sync_worker(void)
  * CATCHUP state to SYNCDONE.
  */
 static bool
-wait_for_relation_state_change(Oid relid, char expected_state)
+wait_for_table_state_change(Oid relid, char expected_state)
 {
 	char		state;
 
@@ -273,15 +230,6 @@ wait_for_worker_state_change(char expected_state)
 	return false;
 }
 
-/*
- * Callback from syscache invalidation.
- */
-void
-invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
-{
-	table_states_validity = SYNC_TABLE_STATE_NEEDS_REBUILD;
-}
-
 /*
  * Handle table synchronization cooperation from the synchronization
  * worker.
@@ -290,8 +238,8 @@ invalidate_syncing_table_states(Datum arg, int cacheid, uint32 hashvalue)
  * predetermined synchronization point in the WAL stream, mark the table as
  * SYNCDONE and finish.
  */
-static void
-process_syncing_tables_for_sync(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 {
 	SpinLockAcquire(&MyLogicalRepWorker->relmutex);
 
@@ -349,9 +297,9 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 
 		/*
 		 * Start a new transaction to clean up the tablesync origin tracking.
-		 * This transaction will be ended within the finish_sync_worker().
-		 * Now, even, if we fail to remove this here, the apply worker will
-		 * ensure to clean it up afterward.
+		 * This transaction will be ended within the FinishSyncWorker(). Now,
+		 * even, if we fail to remove this here, the apply worker will ensure
+		 * to clean it up afterward.
 		 *
 		 * We need to do this after the table state is set to SYNCDONE.
 		 * Otherwise, if an error occurs while performing the database
@@ -387,7 +335,7 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		finish_sync_worker();
+		FinishSyncWorker();
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -414,8 +362,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
  * If the synchronization position is reached (SYNCDONE), then the table can
  * be marked as READY and is no longer tracked.
  */
-static void
-process_syncing_tables_for_apply(XLogRecPtr current_lsn)
+void
+ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 {
 	struct tablesync_start_time_mapping
 	{
@@ -431,14 +379,14 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchTableStates(&started_tx);
+	FetchRelationStates(&started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
 	 * immediate restarts.  We don't need it if there are no tables that need
 	 * syncing.
 	 */
-	if (table_states_not_ready != NIL && !last_start_times)
+	if (relation_states_not_ready != NIL && !last_start_times)
 	{
 		HASHCTL		ctl;
 
@@ -452,7 +400,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	 * Clean up the hash table when we're done with all tables (just to
 	 * release the bit of memory).
 	 */
-	else if (table_states_not_ready == NIL && last_start_times)
+	else if (relation_states_not_ready == NIL && last_start_times)
 	{
 		hash_destroy(last_start_times);
 		last_start_times = NULL;
@@ -461,7 +409,7 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	/*
 	 * Process all tables that are being synchronized.
 	 */
-	foreach(lc, table_states_not_ready)
+	foreach(lc, relation_states_not_ready)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
@@ -586,8 +534,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 					StartTransactionCommand();
 					started_tx = true;
 
-					wait_for_relation_state_change(rstate->relid,
-												   SUBREL_STATE_SYNCDONE);
+					wait_for_table_state_change(rstate->relid,
+												SUBREL_STATE_SYNCDONE);
 				}
 				else
 					LWLockRelease(LogicalRepWorkerLock);
@@ -689,37 +637,6 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)
 	}
 }
 
-/*
- * Process possible state change(s) of tables that are being synchronized.
- */
-void
-process_syncing_tables(XLogRecPtr current_lsn)
-{
-	switch (MyLogicalRepWorker->type)
-	{
-		case WORKERTYPE_PARALLEL_APPLY:
-
-			/*
-			 * Skip for parallel apply workers because they only operate on
-			 * tables that are in a READY state. See pa_can_start() and
-			 * should_apply_changes_for_rel().
-			 */
-			break;
-
-		case WORKERTYPE_TABLESYNC:
-			process_syncing_tables_for_sync(current_lsn);
-			break;
-
-		case WORKERTYPE_APPLY:
-			process_syncing_tables_for_apply(current_lsn);
-			break;
-
-		case WORKERTYPE_UNKNOWN:
-			/* Should never happen. */
-			elog(ERROR, "Unknown worker type");
-	}
-}
-
 /*
  * Create list of columns for COPY based on logical relation mapping.
  */
@@ -1356,7 +1273,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			finish_sync_worker();	/* doesn't return */
+			FinishSyncWorker(); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1599,77 +1516,6 @@ copy_table_done:
 	return slotname;
 }
 
-/*
- * Common code to fetch the up-to-date sync state info into the static lists.
- *
- * Returns true if subscription has 1 or more tables, else false.
- *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
- */
-static bool
-FetchTableStates(bool *started_tx)
-{
-	static bool has_subrels = false;
-
-	*started_tx = false;
-
-	if (table_states_validity != SYNC_TABLE_STATE_VALID)
-	{
-		MemoryContext oldctx;
-		List	   *rstates;
-		ListCell   *lc;
-		SubscriptionRelState *rstate;
-
-		table_states_validity = SYNC_TABLE_STATE_REBUILD_STARTED;
-
-		/* Clean the old lists. */
-		list_free_deep(table_states_not_ready);
-		table_states_not_ready = NIL;
-
-		if (!IsTransactionState())
-		{
-			StartTransactionCommand();
-			*started_tx = true;
-		}
-
-		/* Fetch all non-ready tables. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
-		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
-		}
-		MemoryContextSwitchTo(oldctx);
-
-		/*
-		 * Does the subscription have tables?
-		 *
-		 * If there were not-READY relations found then we know it does. But
-		 * if table_states_not_ready was empty we still need to check again to
-		 * see if there are 0 tables.
-		 */
-		has_subrels = (table_states_not_ready != NIL) ||
-			HasSubscriptionRelations(MySubscription->oid);
-
-		/*
-		 * If the subscription relation cache has been invalidated since we
-		 * entered this routine, we still use and return the relations we just
-		 * finished constructing, to avoid infinite loops, but we leave the
-		 * table states marked as stale so that we'll rebuild it again on next
-		 * access. Otherwise, we mark the table states as valid.
-		 */
-		if (table_states_validity == SYNC_TABLE_STATE_REBUILD_STARTED)
-			table_states_validity = SYNC_TABLE_STATE_VALID;
-	}
-
-	return has_subrels;
-}
-
 /*
  * Execute the initial sync with error handling. Disable the subscription,
  * if it's required.
@@ -1755,7 +1601,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	finish_sync_worker();
+	FinishSyncWorker();
 }
 
 /*
@@ -1773,7 +1619,7 @@ AllTablesyncsReady(void)
 	bool		has_subrels = false;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
@@ -1785,25 +1631,25 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_subrels && (relation_states_not_ready == NIL);
 }
 
 /*
- * Return whether the subscription currently has any relations.
+ * Return whether the subscription currently has any tables.
  *
- * Note: Unlike HasSubscriptionRelations(), this function relies on cached
- * information for subscription relations. Additionally, it should not be
+ * Note: Unlike HasSubscriptionTables(), this function relies on cached
+ * information for subscription tables. Additionally, it should not be
  * invoked outside of apply or tablesync workers, as MySubscription must be
  * initialized first.
  */
 bool
-HasSubscriptionRelationsCached(void)
+HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchTableStates(&started_tx);
+	has_subrels = FetchRelationStates(&started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 419e478b4c6..3c58ad88476 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -91,7 +91,7 @@
  * behave as if two_phase = off. When the apply worker detects that all
  * tablesyncs have become READY (while the tri-state was PENDING) it will
  * restart the apply worker process. This happens in
- * process_syncing_tables_for_apply.
+ * ProcessSyncingTablesForApply.
  *
  * When the (re-started) apply worker finds that all tablesyncs are READY for a
  * two_phase tri-state of PENDING it start streaming messages with the
@@ -1243,7 +1243,7 @@ apply_handle_commit(StringInfo s)
 	apply_handle_commit_internal(&commit_data);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1365,7 +1365,7 @@ apply_handle_prepare(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Since we have already prepared the transaction, in a case where the
@@ -1421,7 +1421,7 @@ apply_handle_commit_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
 
@@ -1487,7 +1487,7 @@ apply_handle_rollback_prepared(StringInfo s)
 	in_remote_transaction = false;
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(rollback_data.rollback_end_lsn);
+	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 	reset_apply_error_context_info();
@@ -1622,7 +1622,7 @@ apply_handle_stream_prepare(StringInfo s)
 	pgstat_report_stat(false);
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(prepare_data.end_lsn);
+	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
 	 * Similar to prepare case, the subskiplsn could be left in a case of
@@ -2464,7 +2464,7 @@ apply_handle_stream_commit(StringInfo s)
 	}
 
 	/* Process any tables that are being synchronized in parallel. */
-	process_syncing_tables(commit_data.end_lsn);
+	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
 
@@ -4133,7 +4133,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			maybe_reread_subscription();
 
 			/* Process any table synchronization changes. */
-			process_syncing_tables(last_received);
+			ProcessSyncingRelations(last_received);
 		}
 
 		/* Cleanup the memory. */
@@ -4623,7 +4623,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
 	 * concurrently add tables to the subscription, the apply worker may not
 	 * process invalidations in time. Consequently,
-	 * HasSubscriptionRelationsCached() might miss the new tables, leading to
+	 * HasSubscriptionTablesCached() might miss the new tables, leading to
 	 * premature advancement of oldest_nonremovable_xid.
 	 *
 	 * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
@@ -4637,7 +4637,7 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
 	 * subscription tables at this stage to prevent unnecessary tuple
 	 * retention.
 	 */
-	if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
+	if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
 	{
 		TimestampTz now;
 
@@ -5876,7 +5876,7 @@ SetupApplyOrSyncWorker(int worker_slot)
 	 * the subscription relation state.
 	 */
 	CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
-								  invalidate_syncing_table_states,
+								  InvalidateSyncingRelStates,
 								  (Datum) 0);
 }
 
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index a1976fae607..4e7303ea631 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -244,8 +244,8 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
-	pg_log_info("reading subscription membership of tables");
-	getSubscriptionTables(fout);
+	pg_log_info("reading subscription membership of relations");
+	getSubscriptionRelations(fout);
 
 	free(inhinfo);				/* not needed any longer */
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 641bece12c7..890db7b08c2 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5305,12 +5305,12 @@ getSubscriptions(Archive *fout)
 }
 
 /*
- * getSubscriptionTables
- *	  Get information about subscription membership for dumpable tables. This
+ * getSubscriptionRelations
+ *	  Get information about subscription membership for dumpable relations. This
  *    will be used only in binary-upgrade mode for PG17 or later versions.
  */
 void
-getSubscriptionTables(Archive *fout)
+getSubscriptionRelations(Archive *fout)
 {
 	DumpOptions *dopt = fout->dopt;
 	SubscriptionInfo *subinfo = NULL;
@@ -5364,7 +5364,7 @@ getSubscriptionTables(Archive *fout)
 
 		tblinfo = findTableByOid(relid);
 		if (tblinfo == NULL)
-			pg_fatal("failed sanity check, table with OID %u not found",
+			pg_fatal("failed sanity check, relation with OID %u not found",
 					 relid);
 
 		/* OK, make a DumpableObject for this relationship */
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index fa6d1a510f7..72a00e1bc20 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -829,6 +829,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
-extern void getSubscriptionTables(Archive *fout);
+extern void getSubscriptionRelations(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 02f97a547dd..61b63c6bb7a 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -89,7 +89,7 @@ extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
-extern bool HasSubscriptionRelations(Oid subid);
+extern bool HasSubscriptionTables(Oid subid);
 extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index de003802612..5f11c4de217 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -251,6 +251,8 @@ extern PGDLLIMPORT bool in_remote_transaction;
 
 extern PGDLLIMPORT bool InitializingApplyWorker;
 
+extern PGDLLIMPORT List *relation_states_not_ready;
+
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
 												bool only_running);
@@ -272,12 +274,16 @@ extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
 
 extern bool AllTablesyncsReady(void);
-extern bool HasSubscriptionRelationsCached(void);
+extern bool HasSubscriptionTablesCached(void);
 extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
-extern void process_syncing_tables(XLogRecPtr current_lsn);
-extern void invalidate_syncing_table_states(Datum arg, int cacheid,
-											uint32 hashvalue);
+extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
+extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+
+pg_noreturn extern void FinishSyncWorker(void);
+extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
+extern bool FetchRelationStates(bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 5290b91e83e..ee1cab6190f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2922,7 +2922,7 @@ SyncRepStandbyData
 SyncRequestHandler
 SyncRequestType
 SyncStandbySlotsConfigData
-SyncingTablesState
+SyncingRelationsState
 SysFKRelationship
 SysScanDesc
 SyscacheCallbackFunction
-- 
2.31.1

#395Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#390)
RE: Logical Replication of sequences

On Wednesday, October 15, 2025 12:12 PM shveta malik <shveta.malik@gmail.com> wrote:

Please find a few more comments on 002:

...

Please find a few comments on 003:

...

Thanks for the comments; I have addressed them in the latest version.

Best Regards,
Hou zj

#396Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Hayato Kuroda (Fujitsu) (#389)
RE: Logical Replication of sequences

On Tuesday, October 14, 2025 8:09 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:

Dear Hou,

Thanks for updating the patch. Here are comments for the recent 0002.
Others are still being reviewed.

Thanks for the comments.

04. check_publications_origin
```
+
+       query = "SELECT DISTINCT P.pubname AS pubname\n"
+                       "FROM pg_publication P,\n"
+                       "     LATERAL %s GPR\n"
...
pgindent does not like the notation. How about changing the line after the "="?
I.e.,

```
query =
"SELECT DISTINCT P.pubname AS pubname\n"
"FROM pg_publication P,\n"
" LATERAL %s GPR\n"
...
```

I chose to accept what pgindent suggests instead of doing more adjustments.

All other comments have been addressed in the latest version.

Best Regards,
Hou zj

#397Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Chao Li (#393)
RE: Logical Replication of sequences

On Wednesday, October 15, 2025 4:28 PM Chao Li <li.evan.chao@gmail.com> wrote:

/*
* getSubscriptionTables
* Get information about subscription membership for dumpable relations. This
* will be used only in binary-upgrade mode for PG17 or later versions.
*/
void
getSubscriptionTables(Archive *fout)

It also mentions “dumpable relations”. Should we update the function to use
“relation” as well?

Thanks for the comments! It has been addressed in the latest version.

(BTW, it would be highly appreciated if you could send emails in plain text format.
The current email is in HTML format, which can make inline replies a bit
challenging. Plain text is also the traditional style preferred by the
community. :))

Best Regards,
Hou zj

#398Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#394)
Re: Logical Replication of sequences

On Wed, Oct 15, 2025 at 4:51 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Here is the new version patch set which includes the following changes:

I have pushed the first patch. Kindly rebase and share the remaining patches.

--
With Regards,
Amit Kapila.

#399Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#398)
3 attachment(s)
RE: Logical Replication of sequences

On Thursday, October 16, 2025 5:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 15, 2025 at 4:51 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Here is the new version patch set which includes the following changes:

I have pushed the first patch. Kindly rebase and share the remaining patches.

Thanks! Here are the remaining patches, which address all pending comments.

Regarding whether we can avoid creating a slot/origin for a sequence-only
publication: I think the main challenge lies in ensuring the apply worker
operates smoothly without a replication slot. Currently, the apply worker uses
the START_REPLICATION command with a replication slot to acquire the slot on
the publisher. To bypass this, it's essential to skip starting replication and,
specifically, to avoid entering LogicalRepApplyLoop().

To address this, I considered implementing a separate loop dedicated to
sequence-only subscriptions. Within this loop, the apply worker would only call
functions like ProcessSyncingSequencesForApply() to manage sequence
synchronization, while periodically checking for any new tables added to the
subscription. If new tables are detected, the apply worker would exit this loop
and enter LogicalRepApplyLoop().
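
To make the shape of that idea concrete, here is a rough sketch, not part of
the attached patches, of what such a sequence-only loop could look like.
ProcessSyncingSequencesForApply(), HasSubscriptionTablesCached(), and
LogicalRepApplyLoop() are names from the patches or the discussion above; the
wrapper name SequenceOnlyApplyLoop, the exact call signatures, the 1-second
timeout, and the omitted transaction/error handling are illustrative
assumptions:

```
/*
 * Hypothetical sketch only: while the subscription contains sequences but no
 * tables, the apply worker could loop here instead of starting streaming
 * replication, so no replication slot would be needed.
 */
static void
SequenceOnlyApplyLoop(void)
{
	for (;;)
	{
		CHECK_FOR_INTERRUPTS();

		/* Launch/monitor the sequence synchronization worker as needed. */
		ProcessSyncingSequencesForApply();	/* actual signature may differ */

		/*
		 * If tables have been added to the subscription, leave this loop so
		 * that the caller can create the slot/origin and enter
		 * LogicalRepApplyLoop() as usual.
		 */
		if (HasSubscriptionTablesCached())
			break;

		/* Sleep until signalled or until the (assumed) 1s timeout expires. */
		(void) WaitLatch(MyLatch,
						 WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
						 1000L, WAIT_EVENT_LOGICAL_APPLY_MAIN);
		ResetLatch(MyLatch);
	}
}
```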

I chose not to pursue allowing the START_REPLICATION command to operate
without a logical slot, as that seems like an unconventional approach requiring
modifications in the walsender to skip logical decoding and related processing.

Another consideration is whether to address scenarios where tables are
subsequently removed from the subscription, given that slots and origins would
already have been created in such cases.

Since that might introduce additional complexity to the patches, and
considering that we already allow a slot/origin to be created for an empty
subscription, it might also be acceptable to allow them to be created for a
sequence-only subscription. So, I chose to add some comments in the latest
version to explain the reason for this.

The origin case might be slightly easier to handle, but it could also require
some amount of implementation work. Since an origin is less harmful than a
replication slot and maintaining it does not have noticeable overhead, it seems
OK to me to retain the current behaviour and add some comments in the patch to
clarify this.

Best Regards,
Hou zj

Attachments:

v20251016-0003-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From 2b5b5583f9c733b0d50ec6d2fca26b48aa7449bc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 7 Oct 2025 20:41:56 +0530
Subject: [PATCH v20251016 3/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |  30 ++-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  62 +++++-
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 359 insertions(+), 45 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 51534a6f88c..082444a4d51 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog contains tables and sequences known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,21 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index dc4fc29466d..954ca320331 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2041,8 +2041,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2188,6 +2189,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..d2c9f84699d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,51 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>
+
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +222,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to the subscription, nor remove
+      sequences that are no longer published.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
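
A usage sketch of the behaviour documented above (assuming the patch series is
applied; the subscription name sub1 is hypothetical):

    -- Pick up tables and sequences newly added to the subscribed
    -- publications; only newly added sequences are synchronized:
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- Re-synchronize the data of all sequences already known to the
    -- subscription; no sequences are added or removed:
    ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
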
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.51.0.windows.1
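
A minimal sketch of the CREATE SUBSCRIPTION behaviour documented above
(connection string, publication and subscription names are hypothetical): with
copy_data = true, the default, pre-existing table data is copied and the
published sequences are synchronized when replication starts.

    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher.example.com dbname=postgres'
        PUBLICATION pub1
        WITH (copy_data = true);
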

Attachment: v20251016-0002-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From e0bd9f806cf4c5264cb5543e7136e5af30272c5f Mon Sep 17 00:00:00 2001
From: Hou Zhijie <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 19:00:16 +0800
Subject: [PATCH v20251016 2/3] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 3 states:
   - INIT (needs synchronizing)
   - DATASYNC (needs re-synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT/DATASYNC state using pg_get_sequence_data().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; synchronized sequences are committed in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, it does
      not add newly published sequences or remove dropped ones.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   8 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  63 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 759 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 106 ++-
 src/backend/replication/logical/tablesync.c   |  85 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/meson.build             |   1 +
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 238 ++++++
 src/tools/pgindent/typedefs.list              |   2 +
 26 files changed, 1392 insertions(+), 156 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
 create mode 100644 src/test/subscription/t/036_sequences.pl
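
To observe the per-sequence states described above on the subscriber,
something like the following can be used (a sketch, assuming the patch is
applied; srsubstate is 'i' for INIT, 'd' for DATASYNC and 'r' for READY):

    SELECT sr.srrelid::regclass AS seq_name, sr.srsubstate, sr.srsublsn
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';
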

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index c615005c923..153a2da6940 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
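
For illustration, the per-subscription error counters, including the column
added here, could be read as:

    SELECT subname, apply_error_count, sequence_sync_error_count,
           sync_error_count
    FROM pg_stat_subscription_stats;
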
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
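
A sketch of how the function can be called on the publisher, mirroring the
LATERAL call issued later by copy_sequences(); it returns the last value, the
is_called flag and the sequence page LSN:

    SELECT c.oid::regclass AS seq_name, d.*
    FROM pg_class c,
         LATERAL pg_get_sequence_data(c.oid) AS d
    WHERE c.relkind = 'S';
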
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5dd4bbc1c20..021d4e0ac69 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1050,8 +1050,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				RemoveSubscriptionRel(sub->oid, relid);
 
 				/*
-				 * XXX Currently there is no sequencesync worker, so we only
-				 * stop tablesync workers.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when relation kind is not sequence.
 				 */
 				if (relkind != RELKIND_SEQUENCE)
 				{
@@ -1062,7 +1062,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2090,7 +2090,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..f6a1c85fdb0 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,23 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * subscription id, relid and type.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * For both apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables and sequences. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +272,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +333,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +422,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +512,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -630,13 +644,13 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +717,7 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +849,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +916,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1293,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1630,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1670,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
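
While the new worker is running, it is visible through the existing monitoring
view with the worker type added above, e.g.:

    SELECT subname, pid, worker_type
    FROM pg_stat_subscription
    WHERE worker_type = 'sequence synchronization';
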
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..7bfcc08a6b1
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,759 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can read the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and would need a
+ *    framework to establish connections to the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Reports discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences on which the subscription's user lacks sufficient
+ * privileges, as well as sequences that exist on both sides but have
+ * mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+
+		StringInfo	seqstr = makeStringInfo();
+		StringInfo	cmd = makeStringInfo();
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		destroyStringInfo(seqstr);
+		destroyStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Advance current_index by the number of sequences queried rather
+		 * than by the number of rows fetched, because some sequences may be
+		 * missing on the publisher and fewer rows than the batch size may be
+		 * returned.  The hash_search() with HASH_REMOVE above keeps the hash
+		 * table entries accurate.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on the publisher were not resynchronized: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	/* Combine the two hashes with XOR */
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute sequence synchronization with error handling.  Disable the
+ * subscription if disable_on_error is set.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by a
+ * system resource error and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
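
If the worker reports a mismatched sequence, the hint above amounts to making
the subscriber's sequence definition match the publisher's and re-running the
synchronization, roughly (names and parameters are hypothetical):

    ALTER SEQUENCE public.my_seq
        AS bigint
        START WITH 1
        INCREMENT BY 1
        MINVALUE 1
        MAXVALUE 9223372036854775807
        NO CYCLE;

    ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
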
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 46d88838894..19b875427ca 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,14 +65,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -85,7 +100,48 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -107,6 +163,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -116,18 +178,26 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static,
+	 * since the same values can be reused until the system catalog is
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
+	*has_pending_sequences = false;
 	*started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
@@ -138,6 +208,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -159,7 +230,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+													rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -184,5 +260,7 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..8543d6c279d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -375,11 +375,12 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	bool		started_tx = false;
 	bool		should_exit = false;
 	Relation	rel = NULL;
+	bool		has_pending_sequences;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -413,6 +414,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +435,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +481,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +553,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1253,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1528,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1574,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1582,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1616,10 +1597,11 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
@@ -1631,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1647,9 +1629,10 @@ HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d986ba2ea50..bae32c5645c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any relations that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any relations that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any relations that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any relations that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any relations that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any relations that are being synchronized in parallel and any
+	 * newly added relations.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4134,7 +4157,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel and
+			 * any newly added relations.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5577,7 +5603,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5697,8 +5724,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5809,6 +5836,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5828,14 +5859,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5906,6 +5939,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5918,9 +5955,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b51d2b17379..8a2e1d1158a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 49deec052c6..88772a22b80 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ae352f6e691..a7c6588999f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that sequencesync doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..ad96e616c02
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,238 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+    -- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should resynchronize the existing
+# sequences of the subscription, but newly published sequences should not be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# when a sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+    ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
+);
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message for the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fb6a67d94e0..072f39292ec 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.51.0.windows.1

Attachment: v20251016-0001-Introduce-REFRESH-SEQUENCES-for-subscripti.patch (application/octet-stream)
From 43b8a1aa5daaa495c9d6423a5604090431a7392e Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 16:57:15 +0800
Subject: [PATCH v20251016] Introduce "REFRESH SEQUENCES" for subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.
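
As a quick illustration (not part of the patch; the publication, subscription
and connection names below are made up), the commands described above would be
used roughly like this:

    -- on the publisher
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- on the subscriber
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres' PUBLICATION seq_pub;

    -- pick up sequences newly added to the publication (and drop removed ones)
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;

    -- mark the already-known sequences as INIT so that their values are
    -- re-synchronized from the publisher
    ALTER SUBSCRIPTION seq_sub REFRESH SEQUENCES;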

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                  |   2 +-
 src/backend/catalog/pg_subscription.c       |  63 ++-
 src/backend/commands/subscriptioncmds.c     | 418 ++++++++++++++------
 src/backend/executor/execReplication.c      |  27 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/relation.c  |   1 +
 src/backend/replication/logical/syncutils.c |   3 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/logical/worker.c    |   2 +
 src/backend/replication/pgoutput/pgoutput.c |   6 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |   3 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/tools/pgindent/typedefs.list            |   1 +
 15 files changed, 399 insertions(+), 153 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..51534a6f88c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8205,7 +8205,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
+   This catalog contains tables and sequences known to the subscription after running
    either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
    <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
    PUBLICATION</command></link>.
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..c615005c923 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -284,7 +284,7 @@ AddSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u already exists",
+		elog(ERROR, "subscription relation %u in subscription %u already exists",
 			 relid, subid);
 
 	/* Form the tuple. */
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * leave tablesync slots or origins in the system when the
 		 * corresponding table is dropped.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +519,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subrels = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,8 +532,23 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
 /*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +583,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +608,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 0f54686b699..174c930e4bb 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -106,12 +106,18 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+typedef struct PublicationRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} PublicationRelKind;
+
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
 									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+									  char *subname, bool only_sequences);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,20 +742,27 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * Currently, a replication origin is created for all subscriptions,
+	 * including those for empty or sequence-only publications. While this is
+	 * unnecessary, optimizing it would necessitate additional checks to skip
+	 * creating origin in DDL operations and apply workers, along with
+	 * handling the creation of the origin once tables are added to the
+	 * subscription. Given that such subscriptions are infrequent and
+	 * maintaining an origin incurs minimal cost, it doesn't seem to be worth
+	 * doing anything about it.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
 	/*
 	 * Connect to remote side to execute requested commands and fetch table
-	 * info.
+	 * and sequence info.
 	 */
 	if (opts.connect)
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +777,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *pubrels;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+									  NULL, 0, stmt->subname, false);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +793,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			pubrels = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = pubrelinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +822,15 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * Currently, a replication slot is created for all subscriptions,
+			 * including those for empty or sequence-only publications. While
+			 * this is unnecessary, optimizing this behavior would require
+			 * additional handling to ensure the apply worker operates
+			 * smoothly without acquiring a slot on the publisher, thus adding
+			 * complexity to the apply worker. Given that such subscriptions
+			 * are infrequent, it doesn't seem to be worth doing anything
+			 * about it.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +854,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +908,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +921,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +944,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -940,33 +969,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 		check_publications_origin(wrconn, sub->publications, copy_data,
 								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
+								  subrel_local_oids, subrel_count, sub->name,
+								  false);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = pubrelinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
@@ -978,28 +1005,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1049,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * XXX Currently there is no sequencesync worker, so we only
+				 * stop tablesync workers.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1106,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,7 +1123,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
@@ -1097,6 +1139,58 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with INIT state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("subscription \"%s\" could not connect to the publisher: %s",
+					   sub->name, err));
+
+	PG_TRY();
+	{
+		check_publications_origin(wrconn, sub->publications, false,
+								  sub->retaindeadtuples, sub->origin, NULL, 0,
+								  sub->name, true);
+
+		/* Get local sequence list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+									   InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1733,6 +1827,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("%s is not allowed for disabled subscriptions",
+								   "ALTER SUBSCRIPTION ... REFRESH SEQUENCES"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1826,7 +1933,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 			check_publications_origin(wrconn, sub->publications, false,
 									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+									  sub->name, false);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2008,7 +2115,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2318,17 +2425,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
 }
 
 /*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other publishers.
+ * This check is required in the following scenarios:
  *
  * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of such relations
+ *      is in subrel_local_oids.
  *
  * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2445,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
  *      details for reliable conflict detection.
+ *    - This check applies to tables only.
  *    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "copy_data =
+ *    true" and "origin = none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
  */
 static void
 check_publications_origin(WalReceiverConn *wrconn, List *publications,
 						  bool copydata, bool retain_dead_tuples,
 						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+						  int subrel_count, char *subname, bool only_sequences)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2353,9 +2466,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	List	   *publist = NIL;
 	int			i;
 	bool		check_rdt;
-	bool		check_table_sync;
+	bool		check_sync;
 	bool		origin_none = origin &&
 		pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) == 0;
+	const char *query;
 
 	/*
 	 * Enable retain_dead_tuples checks only when origin is set to 'any',
@@ -2365,28 +2479,42 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	check_rdt = retain_dead_tuples && !origin_none;
 
 	/*
-	 * Enable table synchronization checks only when origin is 'none', to
-	 * ensure that data from other origins is not inadvertently copied.
+	 * Enable table and sequence synchronization checks only when origin is
+	 * 'none', to ensure that data from other origins is not inadvertently
+	 * copied.
 	 */
-	check_table_sync = copydata && origin_none;
+	check_sync = copydata && origin_none;
 
-	/* retain_dead_tuples and table sync checks occur separately */
-	Assert(!(check_rdt && check_table_sync));
+	/* retain_dead_tuples and data synchronization checks occur separately */
+	Assert(!(check_rdt && check_sync));
 
 	/* Return if no checks are required */
-	if (!check_rdt && !check_table_sync)
+	if (!check_rdt && !check_sync)
 		return;
 
 	initStringInfo(&cmd);
-	appendStringInfoString(&cmd,
-						   "SELECT DISTINCT P.pubname AS pubname\n"
-						   "FROM pg_publication P,\n"
-						   "     LATERAL pg_get_publication_tables(P.pubname) GPT\n"
-						   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
-						   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
-						   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
-						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
-						   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+
+	query = "SELECT DISTINCT P.pubname AS pubname\n"
+		"FROM pg_publication P,\n"
+		"     LATERAL %s GPR\n"
+		"     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid OR"
+		"     (GPR.istable AND"
+		"      GPR.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+		"                    SELECT relid FROM pg_partition_tree(PS.srrelid)))),\n"
+		"     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+		"WHERE C.oid = GPR.relid AND P.pubname IN (";
+
+	if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname))");
+	else if (only_sequences)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+	else
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname) UNION ALL"
+						 " SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+
 	GetPublicationsStr(publications, &cmd, true);
 	appendStringInfoString(&cmd, ")\n");
 
@@ -2399,7 +2527,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	 * existing tables may now include changes from other origins due to newly
 	 * created subscriptions on the publisher.
 	 */
-	if (check_table_sync)
+	if (check_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
@@ -2418,10 +2546,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("could not receive list of replicated tables from the publisher: %s",
+				 errmsg("could not receive list of replicated relations from the publisher: %s",
 						res->err)));
 
-	/* Process tables. */
+	/* Process relations. */
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 	{
@@ -2436,7 +2564,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	}
 
 	/*
-	 * Log a warning if the publisher has subscribed to the same table from
+	 * Log a warning if the publisher has subscribed to the same relation from
 	 * some other publisher. We cannot know the origin of data during the
 	 * initial sync. Data origins can be found only from the WAL by looking at
 	 * the origin id.
@@ -2455,11 +2583,11 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		/* Prepare the list of publication(s) for warning message. */
 		GetPublicationsStr(publist, pubnames, false);
 
-		if (check_table_sync)
+		if (check_sync || only_sequences)
 		{
 			appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
 							 subname);
-			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher relations did not come from other origins."));
 		}
 		else
 		{
@@ -2471,8 +2599,8 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				errmsg_internal("%s", err_msg->data),
-				errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
-								 "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains relations that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain relations that are written to by other subscriptions.",
 								 list_length(publist), pubnames->data),
 				errhint_internal("%s", err_hint->data));
 	}
@@ -2594,8 +2722,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(PublicationRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2603,15 +2746,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		support_relkind_seq = (server_version >= 190000);
+	int			column_count = check_columnlist ? (support_relkind_seq ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2619,7 +2764,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2637,14 +2782,27 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+
+		if (support_relkind_seq)
+			appendStringInfo(&cmd, ", c.relkind\n");
+
+		appendStringInfo(&cmd, "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19 onwards, publications can include sequences, so fetch them too */
+		if (support_relkind_seq)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2662,7 +2820,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2678,22 +2836,32 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		PublicationRelKind *relinfo = palloc_object(PublicationRelKind);
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (support_relkind_seq)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE &&
+			check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2701,7 +2869,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..3f61714ea7f 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1112,18 +1112,35 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
 
 
 /*
- * Check if we support writing into specific relkind.
+ * Check if we support writing into specific relkind of local relation and check
+ * if it aligns with the relkind of the relation on the publisher.
  *
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
-						 const char *relname)
+CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+						 const char *nspname, const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (localrelkind != RELKIND_RELATION &&
+		localrelkind != RELKIND_PARTITIONED_TABLE &&
+		localrelkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
-				 errdetail_relkind_not_supported(relkind)));
+				 errdetail_relkind_not_supported(localrelkind)));
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((localrelkind == RELKIND_SEQUENCE && remoterelkind != RELKIND_SEQUENCE) ||
+		(localrelkind != RELKIND_SEQUENCE && remoterelkind == RELKIND_SEQUENCE))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   remoterelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   localrelkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..37b799f0604 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -425,6 +425,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 
 		/* Check for supported relkind. */
 		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+								 remoterel->relkind,
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 1bb3ca01db0..46d88838894 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -150,7 +150,8 @@ FetchRelationStates(bool *started_tx)
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3c58ad88476..d986ba2ea50 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3367,6 +3367,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
 	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+							 relmapentry->remoterel.relkind,
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3563,6 +3564,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 
 					/* Check that new partition also has supported relkind. */
 					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+											 relmapentry->remoterel.relkind,
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 847806b0a2e..05cc7512520 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,9 +1137,9 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
-	 * need to check all the given publication-table mappings and report an
-	 * error if any publications have a different column list.
+	 * fetch_relation_list. But one can later change the publication so we
+	 * still need to check all the given publication-table mappings and report
+	 * an error if any publications have a different column list.
 	 */
 	foreach(lc, publications)
 	{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ad37f9f6ed0..fa08059671b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2319,11 +2319,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..49deec052c6 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,7 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..0ba86c2ad72 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ee1cab6190f..fb6a67d94e0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2362,6 +2362,7 @@ PublicationObjSpec
 PublicationObjSpecType
 PublicationPartOpt
 PublicationRelInfo
+PublicationRelKind
 PublicationSchemaInfo
 PublicationTable
 PublishGencolsType
-- 
2.31.1

#400Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#387)
RE: Logical Replication of sequences

On Tuesday, October 14, 2025 6:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 14, 2025 at 3:33 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

0003~0005:
Unchanged.

TODO:
* The latest comment from Shveta[5].
* The comment from Amit[6] to avoid creating slot/origin for sequence only subscription.

Few comments on 0003 and 0004 based on the previous version of the patch.
As those are not changed, I assume they apply to the new version
as well.

Thanks for the comments.

2.
- /* Process any tables that are being synchronized in parallel. */
+ /*
+ * Process any tables that are being synchronized in parallel and any
+ * newly added relations.
+ */
ProcessSyncingRelations(commit_data.end_lsn);

pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)

in_remote_transaction = false;

- /* Process any tables that are being synchronized in parallel. */
+ /*
+ * Process any tables that are being synchronized in parallel and any
+ * newly added relations.
+ */
ProcessSyncingRelations(prepare_data.end_lsn);

In the first line of comment, it is mentioned as tables and in the
second line, the relations are mentioned. I think as part of this it
can process sequences as well if any are added. I wonder whether this
(while applying prepare/commit) is the right time to invoke it for
sequences. The apply worker do need to invoke sequencesync worker if
required but not sure if this is the right place.

I agree that we can start the sequence sync worker at any point regardless of the
transaction boundary, because we do not support incremental seq sync, so another
approach could be to call ProcessSyncingSequencesForApply() in the main loop of
LogicalRepApplyLoop(), similar to maybe_advance_nonremovable_xid().
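
For illustration, a minimal sketch of that alternative (hypothetical shape
only; the real LogicalRepApplyLoop() in worker.c also handles streaming,
timeouts, and shutdown, and the exact call site is just an assumption here):

```
/* Heavily simplified, hypothetical shape of the apply main loop. */
static void
apply_loop_sketch(void)
{
	for (;;)
	{
		/* ... receive and apply pending changes from the walsender ... */

		/*
		 * Start a sequencesync worker if any sequences are still in INIT
		 * state.  This is independent of transaction boundaries because
		 * incremental sequence sync is not supported.
		 */
		ProcessSyncingSequencesForApply();

		/* ... wait on the latch/socket until more work arrives ... */
	}
}
```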

OTOH, I think the current implementation also works, because we have one
ProcessSyncingRelations() call in the main loop as well.

What do you think ?

4.
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel,
Oid localidxoid,
*/
LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
- InvalidOid, false);
+ InvalidOid, false, false);


extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+ LogicalRepWorkerType wtype,
bool only_running);

The third parameter is LogicalRepWorkerType and passing it false in
the above usage doesn't make sense. Also, we should update comments
atop logicalrep_worker_find as to why worker_type is required. I want
to know why subid+relid combination is not sufficient to identify the
workers uniquely.

I think it's because the sequencesync worker also does not have a valid relid,
similar to the apply worker, so we would need the worker type to distinguish
them. I added some general comments atop the function in the latest version.
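
To spell out the ambiguity, a rough sketch of the matching condition
(illustrative only; the real logicalrep_worker_find() walks the shared-memory
worker array, and the field names are taken from LogicalRepWorker):

```
/*
 * Both the apply worker and the sequencesync worker register with
 * relid == InvalidOid, so subid + relid alone cannot tell them apart;
 * the worker type has to be part of the match.
 */
static bool
worker_matches(LogicalRepWorker *w, Oid subid, Oid relid,
			   LogicalRepWorkerType wtype)
{
	return w->in_use && w->subid == subid &&
		w->relid == relid && w->type == wtype;
}
```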

Best Regards,
Hou zj

#401Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#400)
Re: Logical Replication of sequences

On Thu, Oct 16, 2025 at 4:54 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Tuesday, October 14, 2025 6:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

2.
- /* Process any tables that are being synchronized in parallel. */
+ /*
+ * Process any tables that are being synchronized in parallel and any
+ * newly added relations.
+ */
ProcessSyncingRelations(commit_data.end_lsn);

pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)

in_remote_transaction = false;

- /* Process any tables that are being synchronized in parallel. */
+ /*
+ * Process any tables that are being synchronized in parallel and any
+ * newly added relations.
+ */
ProcessSyncingRelations(prepare_data.end_lsn);

In the first line of comment, it is mentioned as tables and in the
second line, the relations are mentioned. I think as part of this it
can process sequences as well if any are added. I wonder whether this
(while applying prepare/commit) is the right time to invoke it for
sequences. The apply worker do need to invoke sequencesync worker if
required but not sure if this is the right place.

I agree that we can start the sequence sync worker at any point regardless of the
transaction boundary, because we do not support incremental seq sync, so another
approach could be to call ProcessSyncingSequencesForApply() in the main loop of
LogicalRepApplyLoop(), similar to maybe_advance_nonremovable_xid().

OTOH, I think the current implementation also works, because we have one
ProcessSyncingRelations() call in the main loop as well.

Yeah, the current implementation also appears good, but let's change
the comment to: "Process any tables that are being synchronized in
parallel, as well as any newly added tables or sequences."

What do you think ?

4.
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel,
Oid localidxoid,
*/
LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
- InvalidOid, false);
+ InvalidOid, false, false);


extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+ LogicalRepWorkerType wtype,
bool only_running);

The third parameter is LogicalRepWorkerType and passing it false in
the above usage doesn't make sense. Also, we should update comments
atop logicalrep_worker_find as to why worker_type is required. I want
to know why subid+relid combination is not sufficient to identify the
workers uniquely.

I think it's because the sequencesync worker also does not have a valid relid,
similar to the apply worker, so we would need the worker type to distinguish
them. I added some general comments atop the function in the latest version.

Thanks, the added comments make the change easy to understand.

--
With Regards,
Amit Kapila.

#402Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#399)
Re: Logical Replication of sequences

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Regarding whether we can avoid creating slot/origin for seq-only publication.
I think the main challenge lies in ensuring the apply worker operates smoothly
without a replication slot. Currently, the apply worker uses the
START_REPLICATION command with a replication slot to acquire the slot on the
publisher. To bypass this, it's essential to skip starting the replication and
specifically, avoid entering the LogicalRepApplyLoop().

To address this, I thought to implement a separate loop dedicated to
sequence-only subscriptions. Within this loop, the apply worker would only call
functions like ProcessSyncingSequencesForApply() to manage sequence
synchronization while periodically checking for any new tables added to the
subscription. If new tables are detected, the apply worker would exit this loop
and enter the LogicalRepApplyLoop().

I chose not to consider allowing the START_REPLICATION command to operate
without a logical slot, as it seems like an unconventional approach requiring
modifications in walsender and to skip logical decoding and related processes.

Another consideration is whether to address scenarios where tables are
subsequently removed from the subscription, given that slots and origins would
already have been created in such cases.

Since it might introduce additional complexity to the patches, and considering
that we already allow slot/origin to be created for empty subscription, it might
also be acceptable to allow it to be created for sequence-only subscription. So,
I chose to add some comments to explain the reason for it in latest version.

Origin case might be slightly easier to handle, but it could also require some
amount of implementations. Since origin is less harmful than a replication slot
and maintaining it does not have noticeable overhead, it seems OK to me to
retain the current behaviour and add some comments in the patch to clarify the
same.

I agree that avoiding to create a slot/origin for sequence-only
subscription is not worth the additional complexity at other places,
especially when we do create them for empty subscriptions.

--
With Regards,
Amit Kapila.

#403shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#402)
Re: Logical Replication of sequences

On Fri, Oct 17, 2025 at 10:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Regarding whether we can avoid creating slot/origin for seq-only publication.
I think the main challenge lies in ensuring the apply worker operates smoothly
without a replication slot. Currently, the apply worker uses the
START_REPLICATION command with a replication slot to acquire the slot on the
publisher. To bypass this, it's essential to skip starting the replication and
specifically, avoid entering the LogicalRepApplyLoop().

To address this, I thought to implement a separate loop dedicated to
sequence-only subscriptions. Within this loop, the apply worker would only call
functions like ProcessSyncingSequencesForApply() to manage sequence
synchronization while periodically checking for any new tables added to the
subscription. If new tables are detected, the apply worker would exit this loop
and enter the LogicalRepApplyLoop().

I chose not to consider allowing the START_REPLICATION command to operate
without a logical slot, as it seems like an unconventional approach requiring
modifications in walsender and to skip logical decoding and related processes.

Another consideration is whether to address scenarios where tables are
subsequently removed from the subscription, given that slots and origins would
already have been created in such cases.

Since it might introduce additional complexity to the patches, and considering
that we already allow slot/origin to be created for empty subscription, it might
also be acceptable to allow it to be created for sequence-only subscription. So,
I chose to add some comments to explain the reason for it in latest version.

Origin case might be slightly easier to handle, but it could also require some
amount of implementations. Since origin is less harmful than a replication slot
and maintaining it does not have noticeable overhead, it seems OK to me to
retain the current behaviour and add some comments in the patch to clarify the
same.

I agree that avoiding to create a slot/origin for sequence-only
subscription is not worth the additional complexity at other places,
especially when we do create them for empty subscriptions.

+1.

While testing the 0001 patch alone, I found that for a sequence-only
subscription, we get an error in the tablesync worker:
ERROR: relation "public.seq1" type mismatch: source "table", target "sequence"

This error comes because during copy_table(),
logicalrep_relmap_update() does not update relkind and thus later
CheckSubscriptionRelkind() ends up giving the above error.
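
If that is indeed the cause, the fix is presumably a one-liner along these
lines in logicalrep_relmap_update() (hypothetical; the field copy is assumed
to sit next to the existing nspname/relname/natts copies):

```
/* Hypothetical addition in logicalrep_relmap_update() */
entry->remoterel.relkind = remoterel->relkind;
```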

thanks
Shveta

#404Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#399)
Re: Logical Replication of sequences

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Thanks! Here is the remaining patches, which addressed all pending comments.

Few comments on 0001/0003:
========================
1.
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
  * leave tablesync slots or origins in the system when the
  * corresponding table is dropped.
  */
- if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+ if (!OidIsValid(subid) &&
+ get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+ subrel->srsubstate != SUBREL_STATE_READY)
  {

Here, why don't we allow sequence rel to be removed? Please add some comments.

2.
/*
  * Get the relations for the subscription.
  *
- * If not_ready is true, return only the relations that are not in a ready
- * state, otherwise return all the relations of the subscription.  The
- * returned list is palloc'ed in the current memory context.
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
+ * The returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+ bool not_ready)

The existing code comments (without the patch) are good enough for this change.

3. Move catalogs.sgml, alter_subscription.sgml (parts related to 0001)
from 0003 to 0001. Also, see if anything else can be moved.

4.
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
+     <para>
+      Also, fetch missing sequence information from the publisher.
+     </para>

The second para should be merged into the first one: Fetch missing
table and sequence information from the publisher.

5.
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>d</literal> = re-synchronize,
+       <literal>r</literal> = ready
</para></entry>

I think we need to remove the 'd' state from here as the patch 0001
changes always update the sequence state to init.

--
With Regards,
Amit Kapila.

#405Chao Li
li.evan.chao@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#399)
Re: Logical Replication of sequences

I just reviewed 0001 and got a few comments wrt code comments. I may find some time to review 0002 and 0003 next week.

On Oct 16, 2025, at 19:23, Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com> wrote:

<v20251016-0003-Documentation-for-sequence-synchronization.patch><v20251016-0002-New-worker-for-sequence-synchronization-du.patch><v20251016-0001-Introduce-REFRESH-SEQUENCES-for-subscripti.patch>

1 - 0001 - pg_subscription.c
```
+		/*
+		 * Skip sequence tuples. If even a single table tuple exists then the
+		 * subscription has tables.
+		 */
+		if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+			get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subrels = true;
+			break;
+		}
```

The comment "If even a single table tuple exists then the subscription has tables” sounds redundant. I know it’s inherited from the old code, but now, with the “break” you newly added, the code logic is simple and clear, so I think the comment is no longer needed.

2 - 0001  - pg_subscription.c
```
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
```

This function comment sounds a bit verbose and repetitive. Suggested revision:
```
* get_tables: if true, include tables in the returned list.
* get_sequences: if true, include sequences in the returned list.
* not_ready: if true, include only objects that have not reached the READY state;
* if false, include all objects of the requested type(s).
```

3 - 0001 - subscriptioncmds.c
```
+			 * Currently, a replication slot is created for all subscriptions,
+			 * including those for empty or sequence-only publications. While
+			 * this is unnecessary, optimizing this behavior would require
+			 * additional handling to ensure the apply worker operates
+			 * smoothly without acquiring a slot on the publisher, thus adding
+			 * complexity to the apply worker. Given that such subscriptions
+			 * are infrequent, it doesn't seem to be worth doing anything
+			 * about it.
```

Minor tweaks:
* "optimizing this behavior” -> “optimizing it”
* “doing anything about it” -> “addressing it"

4 - 0001 - subscriptioncmds.c
```
* 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "copy_data =
* true" and "origin = none":
* - Warn the user that sequence data from another origin might have been
* copied.
```

"Warn the user" -> "Warn users"

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#406Dilip Kumar
dilipbalaut@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#399)
Re: Logical Replication of sequences

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
wrote:

On Thursday, October 16, 2025 5:59 PM Amit Kapila <amit.kapila16@gmail.com>
wrote:

While reading through the patch, I have 2 comments in 0002

1.
+ ereport(LOG,
+ errmsg("logical replication sequence synchronization for subscription
\"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched,
%d insufficient permission, %d missing, ",
+   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) +
1, batch_size,
+   batch_succeeded_count, batch_skipped_count, batch_mismatched_count,
batch_insuffperm_count,
+   batch_size - (batch_succeeded_count + batch_skipped_count +
batch_mismatched_count + batch_insuffperm_count)));
+

The log message ends with ..." %d missing, " (a trailing comma and
space).

2.
Also, IMHO, instead of just saying "missing" we can say "missing on/from
publisher" so that it would be clearer.

--
Regards,
Dilip Kumar
Google

#407Masahiko Sawada
sawada.mshk@gmail.com
In reply to: shveta malik (#403)
Re: Logical Replication of sequences

On Fri, Oct 17, 2025 at 1:35 AM shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Oct 17, 2025 at 10:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Regarding whether we can avoid creating slot/origin for seq-only publication.
I think the main challenge lies in ensuring the apply worker operates smoothly
without a replication slot. Currently, the apply worker uses the
START_REPLICATION command with a replication slot to acquire the slot on the
publisher. To bypass this, it's essential to skip starting the replication and
specifically, avoid entering the LogicalRepApplyLoop().

To address this, I thought to implement a separate loop dedicated to
sequence-only subscriptions. Within this loop, the apply worker would only call
functions like ProcessSyncingSequencesForApply() to manage sequence
synchronization while periodically checking for any new tables added to the
subscription. If new tables are detected, the apply worker would exit this loop
and enter the LogicalRepApplyLoop().

I chose not to consider allowing the START_REPLICATION command to operate
without a logical slot, as it seems like an unconventional approach requiring
modifications in walsender and to skip logical decoding and related processes.

Another consideration is whether to address scenarios where tables are
subsequently removed from the subscription, given that slots and origins would
already have been created in such cases.

Since it might introduce additional complexity to the patches, and considering
that we already allow slot/origin to be created for empty subscription, it might
also be acceptable to allow it to be created for sequence-only subscription. So,
I chose to add some comments to explain the reason for it in latest version.

Origin case might be slightly easier to handle, but it could also require some
amount of implementations. Since origin is less harmful than a replication slot
and maintaining it does not have noticeable overhead, it seems OK to me to
retain the current behaviour and add some comments in the patch to clarify the
same.

I agree that avoiding to create a slot/origin for sequence-only
subscription is not worth the additional complexity at other places,
especially when we do create them for empty subscriptions.

+1.

While testing the 0001 patch alone, I found that for a sequence-only
subscription, we get an error in the tablesync worker:
ERROR: relation "public.seq1" type mismatch: source "table", target "sequence"

This error comes because during copy_table(),
logicalrep_relmap_update() does not update relkind and thus later
CheckSubscriptionRelkind() ends up giving the above error.

I faced the same error while reviewing the 0001 patch. I think if
we're going to push these patches separately the 0001 patch should
have at least minimal regression tests. Otherwise, I'm concerned that
buildfarm animals won't complain but we could end up blocking other
logical replication developments.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#408Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Masahiko Sawada (#407)
Re: Logical Replication of sequences

On Fri, Oct 17, 2025 at 12:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Oct 17, 2025 at 1:35 AM shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Oct 17, 2025 at 10:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Regarding whether we can avoid creating slot/origin for seq-only publication.
I think the main challenge lies in ensuring the apply worker operates smoothly
without a replication slot. Currently, the apply worker uses the
START_REPLICATION command with a replication slot to acquire the slot on the
publisher. To bypass this, it's essential to skip starting the replication and
specifically, avoid entering the LogicalRepApplyLoop().

To address this, I thought to implement a separate loop dedicated to
sequence-only subscriptions. Within this loop, the apply worker would only call
functions like ProcessSyncingSequencesForApply() to manage sequence
synchronization while periodically checking for any new tables added to the
subscription. If new tables are detected, the apply worker would exit this loop
and enter the LogicalRepApplyLoop().

I chose not to consider allowing the START_REPLICATION command to operate
without a logical slot, as it seems like an unconventional approach requiring
modifications in walsender and to skip logical decoding and related processes.

Another consideration is whether to address scenarios where tables are
subsequently removed from the subscription, given that slots and origins would
already have been created in such cases.

Since it might introduce additional complexity to the patches, and considering
that we already allow slot/origin to be created for empty subscription, it might
also be acceptable to allow it to be created for sequence-only subscription. So,
I chose to add some comments to explain the reason for it in latest version.

Origin case might be slightly easier to handle, but it could also require some
amount of implementations. Since origin is less harmful than a replication slot
and maintaining it does not have noticeable overhead, it seems OK to me to
retain the current behaviour and add some comments in the patch to clarify the
same.

I agree that avoiding to create a slot/origin for sequence-only
subscription is not worth the additional complexity at other places,
especially when we do create them for empty subscriptions.

+1.

While testing the 0001 patch alone, I found that for a sequence-only
subscription, we get an error in the tablesync worker:
ERROR: relation "public.seq1" type mismatch: source "table", target "sequence"

This error comes because during copy_table(),
logicalrep_relmap_update() does not update relkind and thus later
CheckSubscriptionRelkind() ends up giving the above error.

I faced the same error while reviewing the 0001 patch. I think if
we're going to push these patches separately the 0001 patch should
have at least minimal regression tests. Otherwise, I'm concerned that
buildfarm animals won't complain but we could end up blocking other
logical replication developments.

One minor comment for the 0001 patch:

+       /*
+        * Skip sequence tuples. If even a single table tuple exists then the
+        * subscription has tables.
+        */
+       if (get_rel_relkind(subrel->srrelid) == RELKIND_RELATION ||
+           get_rel_relkind(subrel->srrelid) == RELKIND_PARTITIONED_TABLE)
+       {
+           has_subrels = true;
+           break;
+       }

How about storing the relkind in a variable here and avoiding calling
get_rel_relkind() twice (to save one syscache lookup)?
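
For example, a sketch of the suggested change:

```
char		relkind = get_rel_relkind(subrel->srrelid);

/* Skip sequence tuples; a single table tuple is enough. */
if (relkind == RELKIND_RELATION ||
	relkind == RELKIND_PARTITIONED_TABLE)
{
	has_subrels = true;
	break;
}
```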

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#409Chao Li
li.evan.chao@gmail.com
In reply to: Chao Li (#405)
Re: Logical Replication of sequences

On Oct 17, 2025, at 17:34, Chao Li <li.evan.chao@gmail.com> wrote:

I may find some time to review 0002 and 0003 next week.

I just reviewed 0002 and 0003. Got some comments for 0002, and no comments for 0003.

1 - 0002 - commit comment
```
Sequences have 3 states:
- INIT (needs [re]synchronizing)
- READY (is already synchronized)
```

3 states or 2 states? Missing DATASYNC?

2 - 0002 - launcher.c
```
+ * For both apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables and sequences. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
```

Nit: "both" sounds unnecessary.

3 - 0002 - launcher.c
```
  * Stop the logical replication worker for subid/relid, if any.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
```

Should the comment be updated to "subid/relid/wtype"?

4 - 0002 - launcher.c
```
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
```

Based on the comment you added to “logicalrep_worker_find()”, for an apply worker, relid should be set to InvalidOid. (See comment 2)

Then, if you change this function to only work for WORKERTYPE_APPLY, relid should be hard-coded to InvalidOid, so that “relid” can be removed from the parameter list of logicalrep_worker_wakeup().

5 - 0002 - launcher.c
```
/*
* report_error_sequences
*
* Reports discrepancies in sequence data between the publisher and subscriber.
```

Nit: “Reports” -> “Report”

Here “reports” also works, but looking at the previous function’s comment:

```
* Handle sequence synchronization cooperation from the apply worker.
*
* Start a sequencesync worker if one is not already running. The active
```

You used “handle” and “start” rather than “handles” and “starts”.

“Report” better matches the imperative style used in PG comments.

6 - 0002 - sequencesync.c
```
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
```

At the end of this function, I think we need to destroyStringInfo() these two strings.
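For example, something like this at the end (a sketch, assuming the function returns normally and the variables keep the names from the quoted snippet):

```
	destroyStringInfo(combined_error_detail);
	destroyStringInfo(combined_error_hint);
```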

7 - 0002 - sequencesync.c
```
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
```

Why is this “if” check needed?

8 - 0002 - sequencesync.c
```
+ destroyStringInfo(seqstr);
+ destroyStringInfo(cmd);
```

Instead of making and destroying seqstr and cmd in every iteration, we can create them before the “while” loop and just resetStringInfo() them in each iteration, which should be cheaper and faster.
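In other words, something like this sketch (the loop condition and body are placeholders):

```
	StringInfo	seqstr = makeStringInfo();
	StringInfo	cmd = makeStringInfo();

	while (have_more_sequences)		/* placeholder condition */
	{
		resetStringInfo(seqstr);
		resetStringInfo(cmd);

		/* ... build and run the per-batch command as before ... */
	}

	destroyStringInfo(seqstr);
	destroyStringInfo(cmd);
```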

9 - 0002 - sequencesync.c
```
+	return h1 ^ h2;
+	/* XOR combine */
+}
```

This code is self-descriptive, so the comment looks redundant. Also, why is the comment placed below the code line?

10 - 0002 - syncutils.c
```
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
```
has_pending_sequences is not explained in the function comment.

11 - 0002 - syncutils.c
```
/*
* has_subtables and has_subsequences is declared as static, since the
* same value can be used until the system table is invalidated.
*/
static bool has_subtables = false;
static bool has_subsequences_non_ready = false;
```

The comment says “has_subsequences”, but the real var name is “has_subsequences_non_ready”.

12 - 0002 - syncutils.c
```
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
```

I searched the code; this function has 3 callers, and none of them want results for both tables and sequences. So a caller, for example:

```
bool
HasSubscriptionTablesCached(void)
{
	bool		started_tx;
	bool		has_subrels;
	bool		has_pending_sequences;

	/* We need up-to-date subscription tables info here */
	has_subrels = FetchRelationStates(&has_pending_sequences, &started_tx);

	if (started_tx)
	{
		CommitTransactionCommand();
		pgstat_report_stat(true);
	}

	return has_subrels;
}
```

This looks confusing because it defines has_pending_sequences but does not use it at all.

So, I think FetchRelationStates() can be refactored to return a result for either tables or sequences, based on a type parameter.
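For example, a rough sketch of one possible shape (not what the patch currently does; the parameter name is made up):

```
/* Hypothetical: fetch sync-state info for either tables or sequences */
extern bool FetchRelationStates(bool for_sequences, bool *started_tx);

bool
HasSubscriptionTablesCached(void)
{
	bool		started_tx;
	bool		has_subrels;

	/* Only ask about tables; no unused sequence output parameter */
	has_subrels = FetchRelationStates(false, &started_tx);

	if (started_tx)
	{
		CommitTransactionCommand();
		pgstat_report_stat(true);
	}

	return has_subrels;
}
```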

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#410Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#404)
3 attachment(s)
RE: Logical Replication of sequences

On Friday, October 17, 2025 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
wrote:

Thanks! Here is the remaining patches, which addressed all pending

comments.

Few comments on 0001/0003:

Thanks for the comments.

========================
1.
@@ -480,7 +480,9 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
* leave tablesync slots or origins in the system when the
* corresponding table is dropped.
*/
- if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+ if (!OidIsValid(subid) &&
+ get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+ subrel->srsubstate != SUBREL_STATE_READY)
{

Here, why don't we allow sequence rel to be removed? Please add some
comments.

I think the intention is to allow removing sequences because we do not
create a slot/origin when syncing sequences. I added some comments
for the same.

Other comments have been addressed in the latest version.

Here is the latest patch set which addressed Shveta[1]/messages/by-id/CAJpy0uC898ga+Qo3X=k_MaRUL7EnmXt+ppDJo-nroQZifrk5Hw@mail.gmail.com, Amit[2]/messages/by-id/CAA4eK1K6Aofz_f6afuL+r2M3GfHBEYQ6-5JO93ph9xZAmYugSA@mail.gmail.com, Chao[3]/messages/by-id/B0F583F9-6D4D-4C0F-9F35-64D5AB2F1643@gmail.com[4]/messages/by-id/598FC353-8E9A-4857-A125-740BE24DCBEB@gmail.com,
Dilip[5]/messages/by-id/CAFiTN-sC4yE_u7fa+UQS38JvhxN_VYSWTq2O_2bTRxxq=eW8FQ@mail.gmail.com, Sawada-San's[6]/messages/by-id/CAD21AoBYP1a4++ALRPG=SmCoyGeGCcqhFxWSXYhy1cvmN0i3CA@mail.gmail.com comments.

Some of the code refactoring comments in [4]/messages/by-id/598FC353-8E9A-4857-A125-740BE24DCBEB@gmail.com will be considered in
next versions.

[1]: /messages/by-id/CAJpy0uC898ga+Qo3X=k_MaRUL7EnmXt+ppDJo-nroQZifrk5Hw@mail.gmail.com
[2]: /messages/by-id/CAA4eK1K6Aofz_f6afuL+r2M3GfHBEYQ6-5JO93ph9xZAmYugSA@mail.gmail.com
[3]: /messages/by-id/B0F583F9-6D4D-4C0F-9F35-64D5AB2F1643@gmail.com
[4]: /messages/by-id/598FC353-8E9A-4857-A125-740BE24DCBEB@gmail.com
[5]: /messages/by-id/CAFiTN-sC4yE_u7fa+UQS38JvhxN_VYSWTq2O_2bTRxxq=eW8FQ@mail.gmail.com
[6]: /messages/by-id/CAD21AoBYP1a4++ALRPG=SmCoyGeGCcqhFxWSXYhy1cvmN0i3CA@mail.gmail.com

Best Regards,
Hou zj

Attachments:

v20251020-0003-Documentation-for-sequence-synchronization.patch
From 036f507dec2043ac5c0567d4e8fdf40492b09aea Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Mon, 20 Oct 2025 15:41:14 +0800
Subject: [PATCH v20251020 3/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  24 +++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 309 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..0b2402b6ea6 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2192,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+       Number of times an error occurred during sequence synchronization
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index b0bd4a7cf5d..5362ab57a0e 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -242,6 +242,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.51.0.windows.1

v20251020-0001-Introduce-REFRESH-SEQUENCES-for-subscripti.patch
From 6e5e1c4d150a50e90a7e1c14258a785c66e69656 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 16:57:15 +0800
Subject: [PATCH v20251020 1/2] Introduce "REFRESH SEQUENCES" for subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                  |  29 +-
 doc/src/sgml/ref/alter_subscription.sgml    |  58 ++-
 src/backend/catalog/pg_subscription.c       |  53 ++-
 src/backend/commands/subscriptioncmds.c     | 413 ++++++++++++++------
 src/backend/executor/execReplication.c      |  27 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/proto.c     |   3 +
 src/backend/replication/logical/relation.c  |  11 +
 src/backend/replication/logical/syncutils.c |   3 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/logical/worker.c    |   2 +
 src/backend/replication/pgoutput/pgoutput.c |   6 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |   3 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/subscription/meson.build           |   1 +
 src/test/subscription/t/036_sequences.pl    |  57 +++
 src/tools/pgindent/typedefs.list            |   1 +
 19 files changed, 525 insertions(+), 168 deletions(-)
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..6c8a0f173c9 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..b0bd4a7cf5d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,47 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table and sequence information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +218,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..dffebb521f3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -284,7 +284,7 @@ AddSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u already exists",
+		elog(ERROR, "subscription relation %u in subscription %u already exists",
 			 relid, subid);
 
 	/* Form the tuple. */
@@ -478,9 +478,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * synchronization is in progress unless the caller updates the
 		 * corresponding subscription as well. This is to ensure that we don't
 		 * leave tablesync slots or origins in the system when the
-		 * corresponding table is dropped.
+		 * corresponding table is dropped. For sequences, however, it's ok to
+		 * drop them since no separate slots or origins are created during
+		 * synchronization.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +521,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subtables = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,14 +534,27 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		if (relkind == RELKIND_RELATION ||
+			relkind == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subtables = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
 	table_close(rel, AccessShareLock);
 
-	return has_subrels;
+	return has_subtables;
 }
 
 /*
@@ -547,7 +565,8 @@ HasSubscriptionTables(Oid subid)
  * returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +575,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +600,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 0f54686b699..e1eebe11658 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -106,12 +106,18 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+typedef struct PublicationRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} PublicationRelKind;
+
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
 									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+									  char *subname, bool only_sequences);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,20 +742,27 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * A replication origin is currently created for all subscriptions,
+	 * including those that only contain sequences or are otherwise empty.
+	 *
+	 * XXX: While this is technically unnecessary, optimizing it would require
+	 * additional logic to skip origin creation during DDL operations and
+	 * apply worker initialization, and to handle origin creation dynamically
+	 * when tables are added to the subscription. It is not clear whether
+	 * preventing creation of origins is worth additional complexity.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
 	/*
 	 * Connect to remote side to execute requested commands and fetch table
-	 * info.
+	 * and sequence info.
 	 */
 	if (opts.connect)
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +777,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *pubrels;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+									  NULL, 0, stmt->subname, false);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +793,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			pubrels = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = pubrelinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +822,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * Similar to origins, it is not clear whether preventing the slot
+			 * creation for empty and sequence-only subscriptions is worth
+			 * additional complexity.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +849,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +903,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +916,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +939,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -940,33 +964,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 		check_publications_origin(wrconn, sub->publications, copy_data,
 								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
+								  subrel_local_oids, subrel_count, sub->name,
+								  false);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = pubrelinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
@@ -978,28 +1000,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1044,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * XXX Currently there is no sequencesync worker, so we only
+				 * stop tablesync workers.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1101,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,7 +1118,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
@@ -1097,6 +1134,58 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with INIT state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("subscription \"%s\" could not connect to the publisher: %s",
+					   sub->name, err));
+
+	PG_TRY();
+	{
+		check_publications_origin(wrconn, sub->publications, false,
+								  sub->retaindeadtuples, sub->origin, NULL, 0,
+								  sub->name, true);
+
+		/* Get local sequence list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+									   InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1733,6 +1822,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("%s is not allowed for disabled subscriptions",
+								   "ALTER SUBSCRIPTION ... REFRESH SEQUENCES"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1826,7 +1928,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 			check_publications_origin(wrconn, sub->publications, false,
 									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+									  sub->name, false);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2008,7 +2110,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2318,17 +2420,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
 }
 
 /*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other publishers.
+ * This check is required in the following scenarios:
  *
  * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of such tables
+ *      is in subrel_local_oids.
  *
  * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2440,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
  *      details for reliable conflict detection.
  *    - This check applies to tables only.
  *    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCES statements with "origin =
+ *    none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
  */
 static void
 check_publications_origin(WalReceiverConn *wrconn, List *publications,
 						  bool copydata, bool retain_dead_tuples,
 						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+						  int subrel_count, char *subname, bool only_sequences)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2353,9 +2461,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	List	   *publist = NIL;
 	int			i;
 	bool		check_rdt;
-	bool		check_table_sync;
+	bool		check_sync;
 	bool		origin_none = origin &&
 		pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) == 0;
+	const char *query;
 
 	/*
 	 * Enable retain_dead_tuples checks only when origin is set to 'any',
@@ -2365,28 +2474,42 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	check_rdt = retain_dead_tuples && !origin_none;
 
 	/*
-	 * Enable table synchronization checks only when origin is 'none', to
-	 * ensure that data from other origins is not inadvertently copied.
+	 * Enable table and sequence synchronization checks only when origin is
+	 * 'none', to ensure that data from other origins is not inadvertently
+	 * copied.
 	 */
-	check_table_sync = copydata && origin_none;
+	check_sync = copydata && origin_none;
 
-	/* retain_dead_tuples and table sync checks occur separately */
-	Assert(!(check_rdt && check_table_sync));
+	/* retain_dead_tuples and data synchronization checks occur separately */
+	Assert(!(check_rdt && check_sync));
 
 	/* Return if no checks are required */
-	if (!check_rdt && !check_table_sync)
+	if (!check_rdt && !check_sync)
 		return;
 
 	initStringInfo(&cmd);
-	appendStringInfoString(&cmd,
-						   "SELECT DISTINCT P.pubname AS pubname\n"
-						   "FROM pg_publication P,\n"
-						   "     LATERAL pg_get_publication_tables(P.pubname) GPT\n"
-						   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
-						   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
-						   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
-						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
-						   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+
+	query = "SELECT DISTINCT P.pubname AS pubname\n"
+		"FROM pg_publication P,\n"
+		"     LATERAL %s GPR\n"
+		"     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid OR"
+		"     (GPR.istable AND"
+		"      GPR.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+		"                    SELECT relid FROM pg_partition_tree(PS.srrelid)))),\n"
+		"     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+		"WHERE C.oid = GPR.relid AND P.pubname IN (";
+
+	if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname))");
+	else if (only_sequences)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+	else
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname) UNION ALL"
+						 " SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+
 	GetPublicationsStr(publications, &cmd, true);
 	appendStringInfoString(&cmd, ")\n");
 
@@ -2399,7 +2522,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	 * existing tables may now include changes from other origins due to newly
 	 * created subscriptions on the publisher.
 	 */
-	if (check_table_sync)
+	if (check_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
@@ -2418,10 +2541,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("could not receive list of replicated tables from the publisher: %s",
+				 errmsg("could not receive list of replicated relations from the publisher: %s",
 						res->err)));
 
-	/* Process tables. */
+	/* Process relations. */
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 	{
@@ -2436,7 +2559,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	}
 
 	/*
-	 * Log a warning if the publisher has subscribed to the same table from
+	 * Log a warning if the publisher has subscribed to the same relation from
 	 * some other publisher. We cannot know the origin of data during the
 	 * initial sync. Data origins can be found only from the WAL by looking at
 	 * the origin id.
@@ -2455,11 +2578,11 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		/* Prepare the list of publication(s) for warning message. */
 		GetPublicationsStr(publist, pubnames, false);
 
-		if (check_table_sync)
+		if (check_sync || only_sequences)
 		{
 			appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
 							 subname);
-			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher relations did not come from other origins."));
 		}
 		else
 		{
@@ -2471,8 +2594,8 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				errmsg_internal("%s", err_msg->data),
-				errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
-								 "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains relations that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain relations that are written to by other subscriptions.",
 								 list_length(publist), pubnames->data),
 				errhint_internal("%s", err_hint->data));
 	}
@@ -2594,8 +2717,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(PublicationRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2603,15 +2741,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		support_relkind_seq = (server_version >= 190000);
+	int			column_count = check_columnlist ? (support_relkind_seq ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2619,7 +2759,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2637,14 +2777,27 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+
+		if (support_relkind_seq)
+			appendStringInfo(&cmd, ", c.relkind\n");
+
+		appendStringInfo(&cmd, "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19 onwards, the publisher can also include sequences */
+		if (support_relkind_seq)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2662,7 +2815,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2678,22 +2831,32 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		PublicationRelKind *relinfo = palloc_object(PublicationRelKind);
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (support_relkind_seq)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE &&
+			check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2701,7 +2864,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..3f61714ea7f 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1112,18 +1112,35 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
 
 
 /*
- * Check if we support writing into specific relkind.
+ * Check if we support writing into the specific relkind of the local relation
+ * and whether it matches the relkind of the relation on the publisher.
  *
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
-						 const char *relname)
+CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+						 const char *nspname, const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (localrelkind != RELKIND_RELATION &&
+		localrelkind != RELKIND_PARTITIONED_TABLE &&
+		localrelkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
-				 errdetail_relkind_not_supported(relkind)));
+				 errdetail_relkind_not_supported(localrelkind)));
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((localrelkind == RELKIND_SEQUENCE && remoterelkind != RELKIND_SEQUENCE) ||
+		(localrelkind != RELKIND_SEQUENCE && remoterelkind == RELKIND_SEQUENCE))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   remoterelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   localrelkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 2436a263dc2..aa8409e0711 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -708,6 +708,9 @@ logicalrep_read_rel(StringInfo in)
 	/* Read the replica identity. */
 	rel->replident = pq_getmsgbyte(in);
 
+	/* relkind is not sent */
+	rel->relkind = 0;
+
 	/* Get attribute description */
 	logicalrep_read_attrs(in, rel);
 
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..0f106e83c79 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -196,6 +196,16 @@ logicalrep_relmap_update(LogicalRepRelation *remoterel)
 		entry->remoterel.atttyps[i] = remoterel->atttyps[i];
 	}
 	entry->remoterel.replident = remoterel->replident;
+
+	/*
+	 * XXX The walsender currently does not transmit the relkind of the remote
+	 * relation when replicating changes. Since we support replicating only
+	 * table changes at present, we default to initializing relkind as
+	 * RELKIND_RELATION.
+	 */
+	entry->remoterel.relkind = remoterel->relkind
+		? remoterel->relkind : RELKIND_RELATION;
+
 	entry->remoterel.attkeys = bms_copy(remoterel->attkeys);
 	MemoryContextSwitchTo(oldctx);
 }
@@ -425,6 +435,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 
 		/* Check for supported relkind. */
 		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+								 remoterel->relkind,
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 1bb3ca01db0..510b9e9c50e 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -150,7 +150,8 @@ FetchRelationStates(bool *started_tx)
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3c58ad88476..d986ba2ea50 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3367,6 +3367,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
 	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+							 relmapentry->remoterel.relkind,
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3563,6 +3564,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 
 					/* Check that new partition also has supported relkind. */
 					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+											 relmapentry->remoterel.relkind,
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 847806b0a2e..05cc7512520 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,9 +1137,9 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
-	 * need to check all the given publication-table mappings and report an
-	 * error if any publications have a different column list.
+	 * fetch_relation_list. But one can later change the publication so we
+	 * still need to check all the given publication-table mappings and report
+	 * an error if any publications have a different column list.
 	 */
 	foreach(lc, publications)
 	{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ad37f9f6ed0..fa08059671b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2319,11 +2319,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..49deec052c6 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,7 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..0ba86c2ad72 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..0b300f12228 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b92c39afa93
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,57 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Confirm sequences can be listed in pg_subscription_rel
+my $result = $node_subscriber->safe_psql(
+	'postgres',
+	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+);
+is($result, 'regress_s1|i', "Sequence is listed in the pg_subscription_rel catalog");
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 377a7946585..bdf76d0324f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2363,6 +2363,7 @@ PublicationObjSpec
 PublicationObjSpecType
 PublicationPartOpt
 PublicationRelInfo
+PublicationRelKind
 PublicationSchemaInfo
 PublicationTable
 PublishGencolsType
-- 
2.31.1

Attachment: v20251020-0002-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 4ee505a128e57bb99adb9fd2a6cab5d6d38a2f19 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Mon, 20 Oct 2025 15:39:19 +0800
Subject: [PATCH v20251020 2/2] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state, using pg_get_sequence_data() (an illustrative query is sketched after this list).
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are done, committing synchronized sequences in batches of 100
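
As a rough illustration of steps (a)-(c): the query that sequencesync.c builds
against the publisher is based on pg_get_sequence_data(), conceptually similar
to the hedged sketch below (schema and sequence names are placeholders only;
the actual patch assembles the sequence list from a VALUES clause):

    SELECT n.nspname, c.relname, ps.*, s.seqtypid, s.seqstart,
           s.seqincrement, s.seqmin, s.seqmax, s.seqcycle
    FROM pg_sequence s
         JOIN pg_class c ON c.oid = s.seqrelid
         JOIN pg_namespace n ON n.oid = c.relnamespace
         JOIN LATERAL pg_get_sequence_data(s.seqrelid) AS ps ON true
    WHERE (n.nspname, c.relname) IN (('public', 's1'), ('public', 's2'));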

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case
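
A hedged usage sketch for the new command (the subscription name is a
placeholder; srsubstate 'i' and 'r' are the existing INIT and READY codes in
pg_subscription_rel):

    ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES;

    -- inspect the per-sequence synchronization state on the subscriber
    SELECT c.relname, sr.srsubstate
    FROM pg_subscription_rel sr
         JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';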

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   8 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  69 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 757 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 108 ++-
 src/backend/replication/logical/tablesync.c   |  85 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 191 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 25 files changed, 1344 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index dffebb521f3..9d01df579aa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e1eebe11658..765e00f6dfa 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1047,8 +1047,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				RemoveSubscriptionRel(sub->oid, relid);
 
 				/*
-				 * XXX Currently there is no sequencesync worker, so we only
-				 * stop tablesync workers.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * only stop workers when the relation is not a sequence.
 				 */
 				if (relkind != RELKIND_SEQUENCE)
 				{
@@ -1059,7 +1059,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2087,7 +2087,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..a38a89509d0 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,23 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * subscription id, relid and type.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * For apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables and sequences. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +272,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +333,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +422,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +512,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -628,15 +642,18 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
 
 /*
  * Stop the logical replication worker for subid/relid, if any.
+ *
+ * Similar to logicalrep_worker_find, relid should be set to a valid OID only
+ * for table sync workers.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +720,10 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid,
+									OidIsValid(relid)
+									? WORKERTYPE_TABLESYNC
+									: WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +855,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +922,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1299,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1636,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1676,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..d3e1cd057d2
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,757 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences for which sufficient privileges are missing, as
+ * well as sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match those on the publisher.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy the existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber's
+ * sequence to the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
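+		/*
+		 * Apply the publisher's state to the local sequence and record it as
+		 * synchronized in pg_subscription_rel.
+		 */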
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher, fetching and applying
+ * them in batches.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
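+/* Number of sequences fetched from the publisher in a single batch */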
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
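+		/*
+		 * Each batch runs in its own transaction so that already
+		 * synchronized sequences remain committed even if a later batch
+		 * fails.
+		 */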
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
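+		/*
+		 * Fetch the current state and definition of every sequence in this
+		 * batch from the publisher in a single query.
+		 */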
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/*
+			 * Remove the processed sequence; any entries left over at the
+			 * end are missing on the publisher.
+			 */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d: %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Advance current_index by the number of sequences attempted in this
+		 * batch rather than by the number of rows fetched, since sequences
+		 * missing on the publisher return no row; such entries remain in the
+		 * hash table and are reported as missing afterwards.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("the following sequences were not found on the publisher and were skipped: %s",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback. Mark the entry for the given relation as
+ * invalid, or all entries if reloid is InvalidOid.
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if the hash table is gone or no sequence is listed yet */
+	if (sequences_to_copy == NULL ||
+		hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (OidIsValid(reloid))
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+/* Hash function for the sequences hash table */
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+
+	/*
+	 * string_hash() hashes at most keysize - 1 bytes, so pass the string
+	 * length plus one to cover the whole name.
+	 */
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname) + 1);
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname) + 1);
+
+	return h1 ^ h2;
+}
+
+/* Match function for the sequences hash table */
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
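+	/*
+	 * Collect all sequences of this subscription that are not yet in READY
+	 * state into the hash table.
+	 */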
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
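+	/*
+	 * No suitable index is used here; the scan keys, including the
+	 * inequality on srsubstate, are applied as plain filters.
+	 */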
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
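+		/*
+		 * Keep durable copies of the names; the key strings point into
+		 * shorter-lived storage (the relcache entry and the current
+		 * transaction context).
+		 */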
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of
+ * system resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 510b9e9c50e..45ab805117b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,14 +65,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -85,7 +100,48 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -107,6 +163,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -116,18 +178,26 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences, so we
+ * identify non-ready tables and non-ready sequences together to keep them
+ * consistent. *has_pending_sequences is set to true if any sequence of the
+ * subscription is not yet in ready state.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
+	*has_pending_sequences = false;
 	*started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
@@ -138,6 +208,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,7 +221,7 @@ FetchRelationStates(bool *started_tx)
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
@@ -159,7 +230,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -184,5 +260,7 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..8543d6c279d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -375,11 +375,12 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	bool		started_tx = false;
 	bool		should_exit = false;
 	Relation	rel = NULL;
+	bool		has_pending_sequences;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -413,6 +414,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +435,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +481,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +553,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1253,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1528,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1574,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1582,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1616,10 +1597,11 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
@@ -1631,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1647,9 +1629,10 @@ HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d986ba2ea50..bf59778b42a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4134,7 +4157,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5577,7 +5603,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5697,8 +5724,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5809,6 +5836,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5828,14 +5859,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5906,6 +5939,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5918,9 +5955,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b51d2b17379..8a2e1d1158a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 49deec052c6..88772a22b80 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;		/* sequence name */
+	char	   *nspname;		/* schema name */
+	Oid			localrelid;		/* OID of the local sequence */
+	bool		remote_seq_queried; /* included in a query to the publisher? */
+	Oid			seqowner;		/* owner of the local sequence */
+	bool		entry_valid;	/* false if invalidated concurrently */
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ae352f6e691..a7c6588999f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;	/* last sequencesync start time */
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that sequence synchronization can succeed.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index b92c39afa93..e4a454598b0 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -29,7 +29,15 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Set up the same structure on the subscriber, plus the extra sequences that
+# will be created on the publisher later.
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -47,11 +55,184 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
 my $result = $node_subscriber->safe_psql(
-	'postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should cause sync of new sequences
+# of the publisher, and changes to existing sequences should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is ($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the warning for missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bdf76d0324f..c8599aeb6b7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.31.1

#411Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Masahiko Sawada (#407)
RE: Logical Replication of sequences

On Saturday, October 18, 2025 3:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Oct 17, 2025 at 1:35 AM shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Oct 17, 2025 at 10:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 16, 2025 at 4:53 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

Regarding whether we can avoid creating slot/origin for seq-only publication.

I think the main challenge lies in ensuring the apply worker operates
smoothly without a replication slot. Currently, the apply worker uses
the START_REPLICATION command with a replication slot to acquire the
slot on the publisher. To bypass this, it's essential to skip starting
the replication and specifically, avoid entering the LogicalRepApplyLoop().

To address this, I thought to implement a separate loop dedicated to
sequence-only subscriptions. Within this loop, the apply worker would
only call functions like ProcessSyncingSequencesForApply() to manage
sequence synchronization while periodically checking for any new
tables added to the subscription. If new tables are detected, the
apply worker would exit this loop and enter the LogicalRepApplyLoop().
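
For readers following the thread, here is a minimal sketch of what such a
dedicated loop could look like. ProcessSyncingSequencesForApply() and
HasSubscriptionTablesCached() are names taken from this thread/patch; the
function name SequenceOnlyApplyLoop(), the wait interval, and everything
else below are assumptions, not the proposed implementation.

```c
/*
 * Hypothetical sketch only -- not code from the patch.  A loop for
 * sequence-only subscriptions that never issues START_REPLICATION.
 */
static void
SequenceOnlyApplyLoop(void)
{
	for (;;)
	{
		CHECK_FOR_INTERRUPTS();

		/* Drive sequence synchronization (name taken from the thread). */
		ProcessSyncingSequencesForApply();

		/*
		 * If tables have been added to the subscription, leave this loop;
		 * the caller would then enter the regular LogicalRepApplyLoop().
		 */
		if (HasSubscriptionTablesCached())
			break;

		(void) WaitLatch(MyLatch,
						 WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
						 1000L, WAIT_EVENT_LOGICAL_APPLY_MAIN);
		ResetLatch(MyLatch);
	}
}
```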

I chose not to consider allowing the START_REPLICATION command to
operate without a logical slot, as it seems like an unconventional
approach requiring modifications in walsender and to skip logical
decoding and related processes.

Another consideration is whether to address scenarios where tables
are subsequently removed from the subscription, given that slots
and origins would already have been created in such cases.

Since it might introduce additional complexity to the patches, and
considering that we already allow a slot/origin to be created for an
empty subscription, it might also be acceptable to allow it to be
created for a sequence-only subscription. So, I chose to add some
comments to explain the reason for it in the latest version.

The origin case might be slightly easier to handle, but it could also
require some amount of implementation work. Since an origin is less
harmful than a replication slot and maintaining it does not have
noticeable overhead, it seems OK to me to retain the current
behaviour and add some comments in the patch to clarify the same.

I agree that avoiding creating a slot/origin for a sequence-only
subscription is not worth the additional complexity at other places,
especially when we do create them for empty subscriptions.

+1.

While testing the 001 patch alone, I found that for a sequence-only
subscription, we get an error in the tablesync worker:
ERROR: relation "public.seq1" type mismatch: source "table", target "sequence"

This error comes because, during copy_table(),
logicalrep_relmap_update() does not update relkind, and thus
CheckSubscriptionRelkind() later ends up raising the above error.

Fixed in latest version.
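
For context, here is a simplified illustration of the kind of relkind
comparison that raises this error. It is only a sketch: the helper name and
body are assumptions and not the actual CheckSubscriptionRelkind() code,
though the message wording follows the log quoted above.

```c
/*
 * Hypothetical sketch -- not the actual CheckSubscriptionRelkind() body.
 * It shows how a mismatch between the relation kinds on the publisher
 * (source) and subscriber (target) could produce the error quoted above.
 */
static void
check_relkind_match(char remote_relkind, char local_relkind,
					const char *nspname, const char *relname)
{
	if (remote_relkind != local_relkind)
		ereport(ERROR,
				errcode(ERRCODE_WRONG_OBJECT_TYPE),
				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
					   nspname, relname,
					   remote_relkind == RELKIND_SEQUENCE ? "sequence" : "table",
					   local_relkind == RELKIND_SEQUENCE ? "sequence" : "table"));
}
```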

I faced the same error while reviewing the 0001 patch. I think if we're going to
push these patches separately, the 0001 patch should have at least minimal
regression tests. Otherwise, I'm concerned that buildfarm animals won't
complain, but we could end up blocking other logical replication development.

I moved some tests from 0002 to 0001. Thanks Kuroda-San for contributing
the code for this change.

Best Regards,
Hou zj

#412Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Chao Li (#409)
RE: Logical Replication of sequences

On Monday, October 20, 2025 10:53 AM Chao Li <li.evan.chao@gmail.com> wrote:

On Oct 17, 2025, at 17:34, Chao Li <li.evan.chao@gmail.com> wrote:

I may find some time to review 0002 and 0003 next week.

Thanks for the comments.

2 - 0001  - pg_subscription.c
```
@@ -542,12 +560,21 @@ HasSubscriptionTables(Oid subid)
+ * get_tables: get relations for tables of the subscription.
+ *
+ * get_sequences: get relations for sequences of the subscription.
+ *
+ * not_ready:
+ * If getting tables and not_ready is false, retrieve all tables;
+ * otherwise, retrieve only tables that have not reached the READY state.
+ * If getting sequences and not_ready is false, retrieve all sequences;
+ * otherwise, retrieve only sequences that have not reached the READY state.
+ *
```

This function comment sounds a bit verbose and repetitive. Suggested
revision:
```
* get_tables: if true, include tables in the returned list.
* get_sequences: if true, include sequences in the returned list.
* not_ready: if true, include only objects that have not reached the READY
state;
* if false, include all objects of the requested type(s).
```

I have reverted this change according to Amit's suggestion because the
existing comment is good enough.

3 - 0001 - subscriptioncmds.c
```
+			 * Currently, a replication slot is created for all subscriptions,
+			 * including those for empty or sequence-only publications. While
+			 * this is unnecessary, optimizing this behavior would
```

Minor tweaks:
* "optimizing this behavior” -> “optimizing it”
* “doing anything about it” -> “addressing it"

I have rewritten this part as well based on some off-list
discussions.

4 - 0001 - subscriptioncmds.c
```
* 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with
"copy_data =
* true" and "origin = none":
* - Warn the user that sequence data from another origin might have been
* copied.
```

“Warn the user” -> “Warn users"

I prefer to keep the current wording as that's the existing style used atop
check_publications_origin().

I just reviewed 0002 and 0003. I have some comments for 0002, and no comments
for 0003.

4 - 0002 - launcher.c
```
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, WORKERTYPE_APPLY, true);
```

Based on the comment you added to “logicalrep_worker_find()”, for the apply
worker, relid should be set to InvalidOid. (See comment 2)

Then, if you change this function to only work for WORKERTYPE_APPLY, relid
should be hard-coded to InvalidOid, so that “relid” can be removed from the
parameter list of logicalrep_worker_wakeup().

I chose to pass a different worker type based on the passed OID in this version.
If we decide to change the function interface to allow only the apply worker,
then we can change it in a later version.
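
To make the described approach concrete, a hypothetical illustration of such a
call site is below; the call shape comes from the diff quoted above, but the
ternary on relid is my own assumption and the patch's actual code may differ.

```c
/*
 * Hypothetical call site only -- the patch's actual code may differ.
 * The worker type is derived from whether a relation OID was supplied:
 * a valid relid means a tablesync worker, InvalidOid means the apply worker.
 */
worker = logicalrep_worker_find(subid, relid,
								OidIsValid(relid) ? WORKERTYPE_TABLESYNC
												  : WORKERTYPE_APPLY,
								true);
```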

6 - 0002 - sequencesync.c
```
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
```

At the end of this function, I think we need to destroyStringInfo() these two
strings.

I think we do not need to free these strings as the worker will stop soon after
executing this function. Similarly, some other string/hash table memory frees
in copy_sequences() and LogicalRepSyncSequences() also look unnecessary to me,
and I will consider removing them in a later version.

7 - 0002 - sequencesync.c
```
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
```

Why is this “if” check needed?

I think we want to append a comma only when at least one sequence name
has already been appended to the buf.
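
A tiny standalone illustration of this pattern (hypothetical names; not the
patch code) is:

```c
/*
 * Minimal sketch of comma-separated list building with StringInfo:
 * the separator is appended only before the second and later items.
 * 'names' is a hypothetical List of C strings.
 */
StringInfoData buf;

initStringInfo(&buf);
foreach_ptr(char, name, names)
{
	if (buf.len > 0)
		appendStringInfoString(&buf, ", ");
	appendStringInfoString(&buf, name);
}
/* buf.data now holds e.g. "seq1, seq2, seq3" */
```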

10 - 0002 - syncutils.c
```
/*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
*
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
*
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
*/
bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
```
has_pending_sequences is not explained in the function comment.

11 - 0002 - syncutils.c
```
/*
* has_subtables and has_subsequences is declared as static, since the
* same value can be used until the system table is invalidated.
*/
static bool has_subtables = false;
static bool has_subsequences_non_ready = false;
```

The comment says “has_subsequences”, but the real variable name is
“has_subsequences_non_ready”.

12 - 0002 - syncutils.c
```
bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
```

I searched over the code: this function has 3 callers, and none of them want
results for both tables and sequences. So a caller, for example:

```
bool
HasSubscriptionTablesCached(void)
{
	bool		started_tx;
	bool		has_subrels;
	bool		has_pending_sequences;

	/* We need up-to-date subscription tables info here */
	has_subrels = FetchRelationStates(&has_pending_sequences, &started_tx);

	if (started_tx)
	{
		CommitTransactionCommand();
		pgstat_report_stat(true);
	}

	return has_subrels;
}
```

This looks confusing because it defines has_pending_sequences but does not use
it at all.

So, I think FetchRelationStates() can be refactored to return a result for
either tables or sequences, based on a type parameter.

I will think more about this function and change it in the next version.
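
As a purely illustrative sketch of the suggested interface (the enum, the new
parameter, and the caller shape below are assumptions, not agreed-upon design),
a caller could then ask only for the kind of relations it cares about:

```c
/* Hypothetical selector for the kind of relations the caller wants. */
typedef enum FetchRelKind
{
	FETCH_TABLES,
	FETCH_SEQUENCES
} FetchRelKind;

/*
 * Sketch of how HasSubscriptionTablesCached() might look if
 * FetchRelationStates() took such a type parameter; no unused
 * has_pending_sequences variable would be needed.
 */
bool
HasSubscriptionTablesCached(void)
{
	bool		started_tx;
	bool		has_subrels;

	has_subrels = FetchRelationStates(FETCH_TABLES, &started_tx);

	if (started_tx)
	{
		CommitTransactionCommand();
		pgstat_report_stat(true);
	}

	return has_subrels;
}
```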

Best Regards,
Hou zj

#413Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Zhijie Hou (Fujitsu) (#410)
3 attachment(s)
RE: Logical Replication of sequences

Here is the latest patch set, which addresses Shveta's[1], Amit's[2], Chao's[3][4],
Dilip's[5], and Sawada-San's[6] comments.

I found that the patch could not pass the sanity check because 0001 was missing a comma.
Also, there was a duplicated entry for `REFRESH SEQUENCES`.

The attached set fixes these issues.

Best regards,
Hayato Kuroda
FUJITSU LIMITED

Attachments:

v20251020_2-0001-Introduce-REFRESH-SEQUENCES-for-subscrip.patch
From b4165fdfcd6e618b9aaa4d3628a376874048b6a3 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 16:57:15 +0800
Subject: [PATCH v20251020_2 1/3] Introduce "REFRESH SEQUENCES" for
 subscriptions

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

Additionally, the following subscription commands:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION
have been extended to also refresh sequence mappings. These commands will:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side, improving consistency
and reducing manual maintenance.

Author: Vignesh C, Tomas Vondra
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                  |  29 +-
 doc/src/sgml/ref/alter_subscription.sgml    |  58 ++-
 src/backend/catalog/pg_subscription.c       |  53 ++-
 src/backend/commands/subscriptioncmds.c     | 413 ++++++++++++++------
 src/backend/executor/execReplication.c      |  27 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/proto.c     |   3 +
 src/backend/replication/logical/relation.c  |  11 +
 src/backend/replication/logical/syncutils.c |   3 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/logical/worker.c    |   2 +
 src/backend/replication/pgoutput/pgoutput.c |   6 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |   3 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/subscription/meson.build           |   1 +
 src/test/subscription/t/036_sequences.pl    |  57 +++
 src/tools/pgindent/typedefs.list            |   1 +
 19 files changed, 525 insertions(+), 168 deletions(-)
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..6c8a0f173c9 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..b0bd4a7cf5d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,47 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table and sequence information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +218,30 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..dffebb521f3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -284,7 +284,7 @@ AddSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u already exists",
+		elog(ERROR, "subscription relation %u in subscription %u already exists",
 			 relid, subid);
 
 	/* Form the tuple. */
@@ -478,9 +478,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * synchronization is in progress unless the caller updates the
 		 * corresponding subscription as well. This is to ensure that we don't
 		 * leave tablesync slots or origins in the system when the
-		 * corresponding table is dropped.
+		 * corresponding table is dropped. For sequences, however, it's ok to
+		 * drop them since no separate slots or origins are created during
+		 * synchronization.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +521,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subtables = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,14 +534,27 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		if (relkind == RELKIND_RELATION ||
+			relkind == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subtables = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
 	table_close(rel, AccessShareLock);
 
-	return has_subrels;
+	return has_subtables;
 }
 
 /*
@@ -547,7 +565,8 @@ HasSubscriptionTables(Oid subid)
  * returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +575,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +600,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 0f54686b699..e1eebe11658 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -106,12 +106,18 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+typedef struct PublicationRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} PublicationRelKind;
+
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
 static void check_publications_origin(WalReceiverConn *wrconn,
 									  List *publications, bool copydata,
 									  bool retain_dead_tuples, char *origin,
 									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+									  char *subname, bool only_sequences);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,20 +742,27 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * A replication origin is currently created for all subscriptions,
+	 * including those that only contain sequences or are otherwise empty.
+	 *
+	 * XXX: While this is technically unnecessary, optimizing it would require
+	 * additional logic to skip origin creation during DDL operations and
+	 * apply worker initialization, and to handle origin creation dynamically
+	 * when tables are added to the subscription. It is not clear whether
+	 * preventing creation of origins is worth additional complexity.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
 	/*
 	 * Connect to remote side to execute requested commands and fetch table
-	 * info.
+	 * and sequence info.
 	 */
 	if (opts.connect)
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +777,14 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *pubrels;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
 			check_publications_origin(wrconn, publications, opts.copy_data,
 									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+									  NULL, 0, stmt->subname, false);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +793,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			pubrels = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = pubrelinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +822,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * Similar to origins, it is not clear whether preventing the slot
+			 * creation for empty and sequence-only subscriptions is worth
+			 * additional complexity.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +849,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +903,12 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +916,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,17 +939,17 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
 		off = 0;
@@ -940,33 +964,31 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 		check_publications_origin(wrconn, sub->publications, copy_data,
 								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
+								  subrel_local_oids, subrel_count, sub->name,
+								  false);
 
 		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
-
-		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = pubrelinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
@@ -978,28 +1000,29 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
 		/*
-		 * Next remove state for tables we should not care about anymore using
-		 * the data we collected above
+		 * Next remove state for relations we should not care about anymore
+		 * using the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
 		for (off = 0; off < subrel_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				char		relkind = get_rel_relkind(relid);
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,41 +1044,55 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
-				logicalrep_worker_stop(sub->oid, relid);
-
 				/*
-				 * For READY state, we would have already dropped the
-				 * tablesync origin.
+				 * XXX Currently there is no sequencesync worker, so we only
+				 * stop tablesync workers.
 				 */
-				if (state != SUBREL_STATE_READY)
+				if (relkind != RELKIND_SEQUENCE)
 				{
-					char		originname[NAMEDATALEN];
+					SubRemoveRels *rel = palloc(sizeof(SubRemoveRels));
+
+					rel->relid = relid;
+					rel->state = state;
+
+					sub_remove_rels = lappend(sub_remove_rels, rel);
+
+					logicalrep_worker_stop(sub->oid, relid);
 
 					/*
-					 * Drop the tablesync's origin tracking if exists.
-					 *
-					 * It is possible that the origin is not yet created for
-					 * tablesync worker, this can happen for the states before
-					 * SUBREL_STATE_FINISHEDCOPY. The tablesync worker or
-					 * apply worker can also concurrently try to drop the
-					 * origin and by this time the origin might be already
-					 * removed. For these reasons, passing missing_ok = true.
+					 * For READY state, we would have already dropped the
+					 * tablesync origin.
 					 */
-					ReplicationOriginNameForLogicalRep(sub->oid, relid, originname,
-													   sizeof(originname));
-					replorigin_drop_by_name(originname, true, false);
+					if (state != SUBREL_STATE_READY)
+					{
+						char		originname[NAMEDATALEN];
+
+						/*
+						 * Drop the tablesync's origin tracking if exists.
+						 *
+						 * It is possible that the origin is not yet created
+						 * for tablesync worker, this can happen for the
+						 * states before SUBREL_STATE_FINISHEDCOPY. The
+						 * tablesync worker or apply worker can also
+						 * concurrently try to drop the origin and by this
+						 * time the origin might be already removed. For these
+						 * reasons, passing missing_ok = true.
+						 */
+						ReplicationOriginNameForLogicalRep(sub->oid, relid,
+														   originname,
+														   sizeof(originname));
+						replorigin_drop_by_name(originname, true, false);
+					}
 				}
 
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
-										 get_namespace_name(get_rel_namespace(relid)),
-										 get_rel_name(relid),
-										 sub->name)));
+						errmsg_internal("%s \"%s.%s\" removed from subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
 			}
 		}
 
@@ -1064,10 +1101,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,7 +1118,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
@@ -1097,6 +1134,58 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with INIT state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("subscription \"%s\" could not connect to the publisher: %s",
+					   sub->name, err));
+
+	PG_TRY();
+	{
+		check_publications_origin(wrconn, sub->publications, false,
+								  sub->retaindeadtuples, sub->origin, NULL, 0,
+								  sub->name, true);
+
+		/* Get local sequence list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+									   InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1733,6 +1822,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("%s is not allowed for disabled subscriptions",
+								   "ALTER SUBSCRIPTION ... REFRESH SEQUENCES"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1826,7 +1928,7 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 
 			check_publications_origin(wrconn, sub->publications, false,
 									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+									  sub->name, false);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2008,7 +2110,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2318,17 +2420,17 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
 }
 
 /*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other publishers.
+ * This check is required in the following scenarios:
  *
  * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of such tables
+ *      is in subrel_local_oids.
  *
  * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2440,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
  *      details for reliable conflict detection.
+ *    - This check targets tables only.
  *    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "origin =
+ *    none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
  */
 static void
 check_publications_origin(WalReceiverConn *wrconn, List *publications,
 						  bool copydata, bool retain_dead_tuples,
 						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+						  int subrel_count, char *subname, bool only_sequences)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2353,9 +2461,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	List	   *publist = NIL;
 	int			i;
 	bool		check_rdt;
-	bool		check_table_sync;
+	bool		check_sync;
 	bool		origin_none = origin &&
 		pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) == 0;
+	const char *query;
 
 	/*
 	 * Enable retain_dead_tuples checks only when origin is set to 'any',
@@ -2365,28 +2474,42 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	check_rdt = retain_dead_tuples && !origin_none;
 
 	/*
-	 * Enable table synchronization checks only when origin is 'none', to
-	 * ensure that data from other origins is not inadvertently copied.
+	 * Enable table and sequence synchronization checks only when origin is
+	 * 'none', to ensure that data from other origins is not inadvertently
+	 * copied.
 	 */
-	check_table_sync = copydata && origin_none;
+	check_sync = copydata && origin_none;
 
-	/* retain_dead_tuples and table sync checks occur separately */
-	Assert(!(check_rdt && check_table_sync));
+	/* retain_dead_tuples and data synchronization checks occur separately */
+	Assert(!(check_rdt && check_sync));
 
 	/* Return if no checks are required */
-	if (!check_rdt && !check_table_sync)
+	if (!check_rdt && !check_sync)
 		return;
 
 	initStringInfo(&cmd);
-	appendStringInfoString(&cmd,
-						   "SELECT DISTINCT P.pubname AS pubname\n"
-						   "FROM pg_publication P,\n"
-						   "     LATERAL pg_get_publication_tables(P.pubname) GPT\n"
-						   "     JOIN pg_subscription_rel PS ON (GPT.relid = PS.srrelid OR"
-						   "     GPT.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
-						   "                   SELECT relid FROM pg_partition_tree(PS.srrelid))),\n"
-						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
-						   "WHERE C.oid = GPT.relid AND P.pubname IN (");
+
+	query = "SELECT DISTINCT P.pubname AS pubname\n"
+		"FROM pg_publication P,\n"
+		"     LATERAL %s GPR\n"
+		"     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid OR"
+		"     (GPR.istable AND"
+		"      GPR.relid IN (SELECT relid FROM pg_partition_ancestors(PS.srrelid) UNION"
+		"                    SELECT relid FROM pg_partition_tree(PS.srrelid)))),\n"
+		"     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+		"WHERE C.oid = GPR.relid AND P.pubname IN (";
+
+	if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname))");
+	else if (only_sequences)
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+	else
+		appendStringInfo(&cmd, query,
+						 "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname) UNION ALL"
+						 " SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+
 	GetPublicationsStr(publications, &cmd, true);
 	appendStringInfoString(&cmd, ")\n");
 
@@ -2399,7 +2522,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	 * existing tables may now include changes from other origins due to newly
 	 * created subscriptions on the publisher.
 	 */
-	if (check_table_sync)
+	if (check_sync)
 	{
 		for (i = 0; i < subrel_count; i++)
 		{
@@ -2418,10 +2541,10 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	if (res->status != WALRCV_OK_TUPLES)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("could not receive list of replicated tables from the publisher: %s",
+				 errmsg("could not receive list of replicated relations from the publisher: %s",
 						res->err)));
 
-	/* Process tables. */
+	/* Process relations. */
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 	{
@@ -2436,7 +2559,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	}
 
 	/*
-	 * Log a warning if the publisher has subscribed to the same table from
+	 * Log a warning if the publisher has subscribed to the same relation from
 	 * some other publisher. We cannot know the origin of data during the
 	 * initial sync. Data origins can be found only from the WAL by looking at
 	 * the origin id.
@@ -2455,11 +2578,11 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		/* Prepare the list of publication(s) for warning message. */
 		GetPublicationsStr(publist, pubnames, false);
 
-		if (check_table_sync)
+		if (check_sync || only_sequences)
 		{
 			appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
 							 subname);
-			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+			appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher relations did not come from other origins."));
 		}
 		else
 		{
@@ -2471,8 +2594,8 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 		ereport(WARNING,
 				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				errmsg_internal("%s", err_msg->data),
-				errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
-								 "The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains relations that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain relations that are written to by other subscriptions.",
 								 list_length(publist), pubnames->data),
 				errhint_internal("%s", err_hint->data));
 	}
@@ -2594,8 +2717,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(PublicationRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2603,15 +2741,17 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		support_relkind_seq = (server_version >= 190000);
+	int			column_count = check_columnlist ? (support_relkind_seq ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2619,7 +2759,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2637,14 +2777,27 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+
+		if (support_relkind_seq)
+			appendStringInfo(&cmd, ", c.relkind\n");
+
+		appendStringInfo(&cmd, "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19 onwards, published sequences are also included in the list */
+		if (support_relkind_seq)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2662,7 +2815,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2678,22 +2831,32 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		PublicationRelKind *relinfo = palloc_object(PublicationRelKind);
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (support_relkind_seq)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE &&
+			check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2701,7 +2864,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..3f61714ea7f 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1112,18 +1112,35 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
 
 
 /*
- * Check if we support writing into specific relkind.
+ * Check if we support writing into the specific relkind of the local relation,
+ * and whether it is compatible with the relkind of the relation on the publisher.
  *
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
-						 const char *relname)
+CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+						 const char *nspname, const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (localrelkind != RELKIND_RELATION &&
+		localrelkind != RELKIND_PARTITIONED_TABLE &&
+		localrelkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
-				 errdetail_relkind_not_supported(relkind)));
+				 errdetail_relkind_not_supported(localrelkind)));
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((localrelkind == RELKIND_SEQUENCE && remoterelkind != RELKIND_SEQUENCE) ||
+		(localrelkind != RELKIND_SEQUENCE && remoterelkind == RELKIND_SEQUENCE))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   remoterelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   localrelkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 2436a263dc2..aa8409e0711 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -708,6 +708,9 @@ logicalrep_read_rel(StringInfo in)
 	/* Read the replica identity. */
 	rel->replident = pq_getmsgbyte(in);
 
+	/* relkind is not sent */
+	rel->relkind = 0;
+
 	/* Get attribute description */
 	logicalrep_read_attrs(in, rel);
 
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..0f106e83c79 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -196,6 +196,16 @@ logicalrep_relmap_update(LogicalRepRelation *remoterel)
 		entry->remoterel.atttyps[i] = remoterel->atttyps[i];
 	}
 	entry->remoterel.replident = remoterel->replident;
+
+	/*
+	 * XXX The walsender currently does not transmit the relkind of the remote
+	 * relation when replicating changes. Since we support replicating only
+	 * table changes at present, we default to initializing relkind as
+	 * RELKIND_RELATION.
+	 */
+	entry->remoterel.relkind = remoterel->relkind
+		? remoterel->relkind : RELKIND_RELATION;
+
 	entry->remoterel.attkeys = bms_copy(remoterel->attkeys);
 	MemoryContextSwitchTo(oldctx);
 }
@@ -425,6 +435,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 
 		/* Check for supported relkind. */
 		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+								 remoterel->relkind,
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 1bb3ca01db0..510b9e9c50e 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -150,7 +150,8 @@ FetchRelationStates(bool *started_tx)
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3c58ad88476..d986ba2ea50 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3367,6 +3367,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
 	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+							 relmapentry->remoterel.relkind,
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3563,6 +3564,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 
 					/* Check that new partition also has supported relkind. */
 					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+											 relmapentry->remoterel.relkind,
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 847806b0a2e..05cc7512520 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,9 +1137,9 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
-	 * need to check all the given publication-table mappings and report an
-	 * error if any publications have a different column list.
+	 * fetch_relation_list. But one can later change the publication so we
+	 * still need to check all the given publication-table mappings and report
+	 * an error if any publications have a different column list.
 	 */
 	foreach(lc, publications)
 	{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ad37f9f6ed0..fa08059671b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2319,11 +2319,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..49deec052c6 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,7 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..0ba86c2ad72 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..b92c39afa93
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,57 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are synced correctly to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise extra values will be fetched for
+# the sequences, which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Confirm sequences can be listed in pg_subscription_rel
+my $result = $node_subscriber->safe_psql(
+	'postgres',
+	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+);
+is ($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 377a7946585..bdf76d0324f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2363,6 +2363,7 @@ PublicationObjSpec
 PublicationObjSpecType
 PublicationPartOpt
 PublicationRelInfo
+PublicationRelKind
 PublicationSchemaInfo
 PublicationTable
 PublishGencolsType
-- 
2.47.3
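
As a quick reader aid (not part of the patch): the TAP test added above boils
down to roughly the following psql session, assuming both nodes run a server
with this patch applied; the connection string is a placeholder.

    -- publisher
    CREATE SEQUENCE regress_s1;
    CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

    -- subscriber (the sequence must already exist locally)
    CREATE SEQUENCE regress_s1;
    CREATE SUBSCRIPTION regress_seq_sub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION regress_seq_pub;

    -- the published sequence shows up in pg_subscription_rel in INIT ('i') state
    SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid;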

Attachment: v20251020_2-0002-New-worker-for-sequence-synchronization-.patch (application/octet-stream)
From 608175a9f6be67f4cd4fb2de7a38a5127fd7c9a5 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Mon, 20 Oct 2025 15:39:19 +0800
Subject: [PATCH v20251020_2 2/3] New worker for sequence synchronization
 during subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state, using pg_sequence_state().
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to DATASYNC state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case
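
For illustration only (not part of the patch), the refresh entry points above
correspond to SQL along these lines, with "mysub" used as a placeholder
subscription name:

    -- pick up newly published sequences (and drop removed ones), then
    -- synchronize only the newly added entries
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;

    -- re-synchronize every sequence already known to the subscription,
    -- without adding or removing pg_subscription_rel entries
    ALTER SUBSCRIPTION mysub REFRESH SEQUENCES;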

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   8 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  69 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 757 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 108 ++-
 src/backend/replication/logical/tablesync.c   |  85 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 191 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 25 files changed, 1344 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index dffebb521f3..9d01df579aa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e1eebe11658..765e00f6dfa 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1047,8 +1047,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				RemoveSubscriptionRel(sub->oid, relid);
 
 				/*
-				 * XXX Currently there is no sequencesync worker, so we only
-				 * stop tablesync workers.
+				 * A single sequencesync worker synchronizes all sequences, so
+				 * stop sync workers only when the relation is not a sequence.
 				 */
 				if (relkind != RELKIND_SEQUENCE)
 				{
@@ -1059,7 +1059,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 					sub_remove_rels = lappend(sub_remove_rels, rel);
 
-					logicalrep_worker_stop(sub->oid, relid);
+					logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 					/*
 					 * For READY state, we would have already dropped the
@@ -2087,7 +2087,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..a38a89509d0 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,23 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * subscription id, relid and type.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * For apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables and sequences. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +272,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +333,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +422,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +512,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -628,15 +642,18 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
 
 /*
  * Stop the logical replication worker for subid/relid, if any.
+ *
+ * Similar to logicalrep_worker_find, relid should be set to a valid OID only
+ * for table sync workers.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +720,10 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid,
+									OidIsValid(relid)
+									? WORKERTYPE_TABLESYNC
+									: WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +855,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last sequencesync start time (last_seqsync_start_time) recorded in
+ * the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +922,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1299,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1636,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1676,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..d3e1cd057d2
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,757 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT or DATASYNC state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog, whereas the launcher
+ *    process operates without direct database access and would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences for which there are insufficient privileges, as well
+ * as sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing and
+		 * the result set can be smaller than the batch. The hash_search()
+		 * call with HASH_REMOVE takes care of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("sequences not found on the publisher were removed from resynchronization: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 510b9e9c50e..45ab805117b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,14 +65,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -85,7 +100,48 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -107,6 +163,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -116,18 +178,26 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static, since
+	 * the same values can be reused until the relation states are
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
+	*has_pending_sequences = false;
 	*started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
@@ -138,6 +208,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,7 +221,7 @@ FetchRelationStates(bool *started_tx)
 		}
 
 		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
@@ -159,7 +230,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -184,5 +260,7 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..8543d6c279d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -375,11 +375,12 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	bool		started_tx = false;
 	bool		should_exit = false;
 	Relation	rel = NULL;
+	bool		has_pending_sequences;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -413,6 +414,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +435,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +481,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +553,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1253,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1528,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1574,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1582,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1616,10 +1597,11 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
@@ -1631,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1647,9 +1629,10 @@ HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d986ba2ea50..bf59778b42a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4134,7 +4157,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5577,7 +5603,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5697,8 +5724,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5809,6 +5836,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5828,14 +5859,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5906,6 +5939,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5918,9 +5955,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b51d2b17379..8a2e1d1158a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 49deec052c6..88772a22b80 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ae352f6e691..a7c6588999f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that sequence sync doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index b92c39afa93..e4a454598b0 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -29,7 +29,15 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -47,11 +55,184 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
 my $result = $node_subscriber->safe_psql(
-	'postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should cause sync of new sequences
+# of the publisher, and changes to existing sequences should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error when the
+# sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is ($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the message about sequences missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bdf76d0324f..c8599aeb6b7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.47.3

Attachment: v20251020_2-0003-Documentation-for-sequence-synchronizati.patch (application/octet-stream)
From a3e7f34281ea5f7c00995cafd9150daef27b8b8f Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Mon, 20 Oct 2025 15:41:14 +0800
Subject: [PATCH v20251020_2 3/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 6 files changed, 285 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last value set by <function>nextval</function> or
+        <function>setval</function>, <literal>is_called</literal> indicates
+        whether the sequence has been used, and <literal>page_lsn</literal> is
+        the LSN of the most recent WAL record that modified this sequence.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..0b2402b6ea6 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2192,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.47.3

#414Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#413)
Re: Logical Replication of sequences

On Mon, Oct 20, 2025 at 4:59 AM Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Here is the latest patch set which addressed Shveta[1], Amit[2], Chao[3][4],
Dilip[5], Sawada-San's[6] comments.

I found the patch could not pass the sanity check, because 0001 missed a comma.
Also, there was a duplicated entry for `REFRESH SEQUENCES`.

Attached set fixed the issue.

Thank you for updating the patch! I have a few comments on the 0001 patch:

-       appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-                        "       FROM pg_class c\n"
+       appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+
+       if (support_relkind_seq)
+           appendStringInfo(&cmd, ", c.relkind\n");
+
+       appendStringInfo(&cmd, "   FROM pg_class c\n"
                         "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
                         "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
                         "                FROM pg_publication\n"
                         "                WHERE pubname IN ( %s )) AS gpt\n"
                         "             ON gpt.relid = c.oid\n",
                         pub_names->data);

I think we don't necessarily need to avoid fetching relkind for servers
older than 19. IIUC, the reason we had to have check_columnlist was that
the attnames column was introduced to the pg_publication_tables catalog.
But that is not the case for this patch, since we get relkind from
pg_class. That is, I think we can fetch all 4 columns from servers >= 16.
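
For illustration, the query could then be built unconditionally; a rough
sketch of what would be sent to the publisher in that case (using an
illustrative publication name 'pub1'):

SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*
           FROM pg_publication
          WHERE pubname IN ( 'pub1' )) AS gpt
    ON gpt.relid = c.oid;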

---
 /*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other publishers.
+ * This check is required in the following scenarios:
  *
  * 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "copy_data = true" and "origin = none":
  *    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of such tables
+ *      is in subrel_local_oids.
  *
  * 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
  *    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2440,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - Warn the user that only conflict detection info for local changes on
  *      the publisher is retained. Data from other origins may lack sufficient
  *      details for reliable conflict detection.
+ *    - This check targets for tables only.
  *    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "origin =
+ *    none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
  */

While this function is well documented, I find it's quite complex, and
this patch adds to that complexity. The function has 9 arguments,
making it difficult to understand which combinations of arguments
enable which checks. For example, the function header comment doesn't
explain when to use the only_sequences parameter. At first, I thought
only_sequences should be set to true when checking if the publisher
has subscribed to sequences from other publishers, but looking at the
code, I discovered it doesn't check sequences when check_rdt is true:

+   if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname))");
+   else if (only_sequences)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+   else
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname) UNION ALL"
+                        " SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+

I find that the complexity might stem from checking different cases in
one function, but I don't have better ideas to improve the logic for
now. I think we can at least describe what the caller can expect when
specifying only_sequences as true.

---
+    * apply workers initilization, and to handle origin creation dynamically

s/initilization/initialization/

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#415shveta malik
shveta.malik@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#413)
Re: Logical Replication of sequences

On Mon, Oct 20, 2025 at 5:29 PM Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Here is the latest patch set which addressed Shveta[1], Amit[2], Chao[3][4],
Dilip[5], Sawada-San's[6] comments.

I found the patch could not pass the sanity check, because 0001 missed a comma.
Also, there was a duplicated entry for `REFRESH SEQUENCES`.

Attached set fixed the issue.

Thanks for the patch. I tried to build the docs with only patch001
applied. It did not go through; errors:

postgres.sgml:198: element xref: validity error : IDREF attribute
linkend references an unknown ID "sequence-definition-mismatches"
postgres.sgml:234: element xref: validity error : IDREF attribute
linkend references an unknown ID "sequence-definition-mismatches"
postgres.sgml:239: element xref: validity error : IDREF attribute
linkend references an unknown ID "sequences-out-of-sync"

thanks
Shveta

#416Amit Kapila
amit.kapila16@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#413)
Re: Logical Replication of sequences

On Mon, Oct 20, 2025 at 5:29 PM Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Here is the latest patch set which addressed Shveta[1], Amit[2], Chao[3][4],
Dilip[5], Sawada-San's[6] comments.

I found the patch could not pass the sanity check, because 0001 missed a comma.
Also, there was a duplicated entry for `REFRESH SEQUENCES`.

Attached set fixed the issue.

Some minor comments on 0001:

+
+# This tests that sequences are synced correctly to the subscriber

Is this comment okay for 0001? How about: This tests that sequences
are registered to be synced to the subscriber?

+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');

Do we need to set this additional GUC for 0001? I understand it could
be required for 0002 though.

--
With Regards,
Amit Kapila.

#417shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#416)
Re: Logical Replication of sequences

Few trivial comments on 001:

patch001:

1)
FetchRelationStates
  /* Fetch tables and sequences that are in non-ready state. */
- rstates = GetSubscriptionRelations(MySubscription->oid, true);
+ rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+    true);

We are passing get_sequences as false; we should update the comment to
get rid of 'sequences'.

2)
+ /*
+ * XXX The walsender currently does not transmit the relkind of the remote
+ * relation when replicating changes. Since we support replicating only
+ * table changes at present, we default to initializing relkind as
+ * RELKIND_RELATION.
+ */
+ entry->remoterel.relkind = remoterel->relkind
+ ? remoterel->relkind : RELKIND_RELATION;

Shall we also mention that currently this is used or needed only for
CheckSubscriptionRelkind()?

~~

Few on 002:

3)
sequencesync.c:

+ * Executing the following command resets all sequences in the subscription to
+ * state DATASYNC, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ * Sequence state transitions follow this pattern:
+ *   INIT / DATASYNC → READY

These comments need correction. We no longer use the DATASYNC state for sequences.

4)
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * (100) sequences are synchronized per transaction. The locks on the sequence

We can avoid writing 100 here. Mentioning the macro is enough.

5)
I had some 250 sequences; I dropped one on the publisher and ran
'REFRESH SEQUENCES' on the subscriber, and noticed a few issues (a rough
reproduction sketch follows the list below):

a) The seqsync worker logged the message below but did not actually
remove the sequence. It seems to be a leftover log message from the
previous implementation:

LOG: sequences not found on publisher removed from resynchronization:
("public.myseq250")

b) The seq sync worker kept being restarted very quickly. It should have
waited for wal_retrieve_retry_interval (which is 5s in my env) before
attempting a restart, shouldn't it? Logs:
----
14:27:53.492 IST [72327] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
14:27:53.499 IST [72327] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
14:27:53.500 IST [72327] LOG: logical replication sequence
synchronization for subscription "sub1" - batch #1 = 1 attempted, 0
succeeded, 0 skipped, 0 mismatched, 0 insufficient permission, 1
missing from publisher
14:27:53.500 IST [72327] LOG: sequences not found on publisher
removed from resynchronization: ("public.myseq250")
14:27:53.500 IST [72327] LOG: logical replication sequence
synchronization worker for subscription "sub1" has finished
14:27:53.503 IST [72329] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
14:27:53.508 IST [72329] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
----

c) Should this case give a suggestion/HINT in the log to run 'REFRESH
PUBLICATION' to correct the sequence mapping?
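
For reference, a rough sketch of the setup described above (all names
are illustrative, not the exact ones from my environment):

-- Publisher: create 250 sequences and publish them.
DO $$
BEGIN
  FOR i IN 1..250 LOOP
    EXECUTE format('CREATE SEQUENCE myseq%s', i);
  END LOOP;
END $$;
CREATE PUBLICATION pub1 FOR ALL SEQUENCES;

-- Subscriber: create the same 250 sequences (same DO block), then subscribe.
CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost dbname=postgres' PUBLICATION pub1;

-- Publisher: drop one of the published sequences.
DROP SEQUENCE myseq250;

-- Subscriber: trigger re-synchronization.
ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;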

thanks
Shveta

#418vignesh C
vignesh21@gmail.com
In reply to: Masahiko Sawada (#414)
3 attachment(s)
Re: Logical Replication of sequences

On Tue, 21 Oct 2025 at 03:49, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Oct 20, 2025 at 4:59 AM Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Here is the latest patch set which addressed Shveta[1], Amit[2], Chao[3][4],
Dilip[5], Sawada-San's[6] comments.

I found the patch could not pass the sanity check, because 0001 missed a comma.
Also, there was a duplicated entry for `REFRESH SEQUENCES`.

Attached set fixed the issue.

Thank you for updating the patch! I have a few comments on the 0001 patch:

-       appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-                        "       FROM pg_class c\n"
+       appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs");
+
+       if (support_relkind_seq)
+           appendStringInfo(&cmd, ", c.relkind\n");
+
+       appendStringInfo(&cmd, "   FROM pg_class c\n"
                         "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
                         "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
                         "                FROM pg_publication\n"
                         "                WHERE pubname IN ( %s )) AS gpt\n"
                         "             ON gpt.relid = c.oid\n",
                         pub_names->data);

I think we don't necessarily need to avoid fetching relkind for servers
older than 19. IIUC, the reason we had to have check_columnlist was that
the attnames column was introduced to the pg_publication_tables catalog.
But that is not the case for this patch, since we get relkind from
pg_class. That is, I think we can fetch all 4 columns from servers >= 16.

Modified

---
/*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other publishers.
+ * This check is required in the following scenarios:
*
* 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
*    statements with "copy_data = true" and "origin = none":
*    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of such tables
+ *      is in subrel_local_oids.
*
* 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
*    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2440,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
*    - Warn the user that only conflict detection info for local changes on
*      the publisher is retained. Data from other origins may lack sufficient
*      details for reliable conflict detection.
+ *    - This check targets for tables only.
*    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "origin =
+ *    none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
*/

While this function is well documented, I find it's quite complex, and
this patch adds to that complexity. The function has 9 arguments,
making it difficult to understand which combinations of arguments
enable which checks. For example, the function header comment doesn't
explain when to use the only_sequences parameter. At first, I thought
only_sequences should be set to true when checking if the publisher
has subscribed to sequences from other publishers, but looking at the
code, I discovered it doesn't check sequences when check_rdt is true:

+   if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname))");
+   else if (only_sequences)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+   else
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM pg_get_publication_tables(P.pubname) UNION ALL"
+                        " SELECT relid, FALSE as istable FROM pg_get_publication_sequences(P.pubname))");
+

I find that the complexity might stem from checking different cases in
one function, but I don't have better ideas to improve the logic for
now. I think we can at least describe what the caller can expect when
specifying only_sequences as true.

Split this function into check_publications_origin_sequences and
check_publications_origin_tables to reduce the complexity. After this
change, we log two separate warnings when both the published tables and
the published sequences are written to by other subscriptions, like:
WARNING: subscription "sub1" requested copy_data with origin = NONE
but might copy data that had a different origin
DETAIL: The subscription subscribes to a publication ("pub1") that
contains tables that are written to by other subscriptions.
HINT: Verify that initial data copied from the publisher tables did
not come from other origins.
WARNING: subscription "sub1" requested copy_data with origin = NONE
but might copy data that had a different origin
DETAIL: The subscription subscribes to a publication ("pub1") that
contains sequences that are written to by other subscriptions.
HINT: Verify that initial data copied from the publisher sequences
did not come from other origins.

I feel this is OK; I am just highlighting it here since this behavior
will be seen in the new version. The messages now clearly indicate that
the first warning is for tables and the second one is for sequences.
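
For reference, these warnings would be raised when the subscriber runs
something like the following (connection string and names are
illustrative) against a publisher whose published tables and sequences
are themselves written to by another subscription:

CREATE SUBSCRIPTION sub1
  CONNECTION 'host=localhost dbname=test_pub'
  PUBLICATION pub1
  WITH (origin = none, copy_data = true);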

---
+    * apply workers initilization, and to handle origin creation dynamically

s/initilization/initialization/

Modified

Also, Amit's comments from [1]/messages/by-id/CAA4eK1Kobtqw3gf7RupcV=MYS53iHjgshVHQFTfL-n83XxucPg@mail.gmail.com and Shveta's comments from [2]/messages/by-id/CAJpy0uCvHmDdp7W1kDvVEwFT2+V9gnj5boE5+mtUqVknE9-bvQ@mail.gmail.com have been
handled, and Shveta's comments on the first patch from [3]/messages/by-id/CAJpy0uDxms8ynrHWXHGFuxDLA7QzDLLASqpqdWnD=La8UJPt7Q@mail.gmail.com have been handled as well.

The attached patch has the changes for the same.

[1]: /messages/by-id/CAA4eK1Kobtqw3gf7RupcV=MYS53iHjgshVHQFTfL-n83XxucPg@mail.gmail.com
[2]: /messages/by-id/CAJpy0uCvHmDdp7W1kDvVEwFT2+V9gnj5boE5+mtUqVknE9-bvQ@mail.gmail.com
[3]: /messages/by-id/CAJpy0uDxms8ynrHWXHGFuxDLA7QzDLLASqpqdWnD=La8UJPt7Q@mail.gmail.com

Regards,
Vignesh

Attachments:

v20251021-0001-Introduce-REFRESH-SEQUENCES-for-subscripti.patchtext/x-patch; charset=US-ASCII; name=v20251021-0001-Introduce-REFRESH-SEQUENCES-for-subscripti.patchDownload
From 809801763b765c6ffff5fb827f6963d1f9327c88 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 16:57:15 +0800
Subject: [PATCH v20251021 1/3] Introduce "REFRESH SEQUENCES" for
 subscriptions.

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

In addition to the new command, the following subscription commands have
been enhanced to automatically refresh sequence mappings:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION

These commands will perform the following actions:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side.

Note that the actual synchronization of sequence data/values will be
handled in a subsequent patch that introduces a dedicated sequence sync
worker.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                  |  29 +-
 doc/src/sgml/ref/alter_subscription.sgml    |  43 +-
 src/backend/catalog/pg_subscription.c       |  53 ++-
 src/backend/commands/subscriptioncmds.c     | 457 ++++++++++++++++----
 src/backend/executor/execReplication.c      |  27 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/proto.c     |   3 +
 src/backend/replication/logical/relation.c  |  12 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/logical/worker.c    |   2 +
 src/backend/replication/pgoutput/pgoutput.c |   6 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |   3 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/subscription/meson.build           |   1 +
 src/test/subscription/t/036_sequences.pl    |  56 +++
 src/tools/pgindent/typedefs.list            |   1 +
 19 files changed, 592 insertions(+), 132 deletions(-)
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..6c8a0f173c9 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..95a24a60a38 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,9 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +159,41 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table and sequence information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +212,21 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add newly published sequences to the subscription or remove
+      sequences that are no longer published.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..dffebb521f3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -284,7 +284,7 @@ AddSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u already exists",
+		elog(ERROR, "subscription relation %u in subscription %u already exists",
 			 relid, subid);
 
 	/* Form the tuple. */
@@ -478,9 +478,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * synchronization is in progress unless the caller updates the
 		 * corresponding subscription as well. This is to ensure that we don't
 		 * leave tablesync slots or origins in the system when the
-		 * corresponding table is dropped.
+		 * corresponding table is dropped. For sequences, however, it's ok to
+		 * drop them since no separate slots or origins are created during
+		 * synchronization.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE &&
+			subrel->srsubstate != SUBREL_STATE_READY)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +521,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subtables = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,14 +534,27 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		if (relkind == RELKIND_RELATION ||
+			relkind == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subtables = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
 	table_close(rel, AccessShareLock);
 
-	return has_subrels;
+	return has_subtables;
 }
 
 /*
@@ -547,7 +565,8 @@ HasSubscriptionTables(Oid subid)
  * returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool get_tables, bool get_sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +575,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'get_tables' and 'get_sequences' must be true. */
+	Assert(get_tables || get_sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +600,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !get_sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION || relkind == RELKIND_PARTITIONED_TABLE)
+			&& !get_tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 0f54686b699..73b068dd31c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -106,12 +106,25 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
-static void check_publications_origin(WalReceiverConn *wrconn,
-									  List *publications, bool copydata,
-									  bool retain_dead_tuples, char *origin,
-									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+typedef struct PublicationRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} PublicationRelKind;
+
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
+static void check_publications_origin_sequences(WalReceiverConn *wrconn,
+												List *publications,
+												char *origin,
+												Oid *subrel_local_oids,
+												int subrel_count,
+												char *subname);
+static void check_publications_origin_tables(WalReceiverConn *wrconn,
+											 List *publications, bool copydata,
+											 bool retain_dead_tuples,
+											 char *origin,
+											 Oid *subrel_local_oids,
+											 int subrel_count, char *subname);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,20 +749,27 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * A replication origin is currently created for all subscriptions,
+	 * including those that only contain sequences or are otherwise empty.
+	 *
+	 * XXX: While this is technically unnecessary, optimizing it would require
+	 * additional logic to skip origin creation during DDL operations and
+	 * apply workers initialization, and to handle origin creation dynamically
+	 * when tables are added to the subscription. It is not clear whether
+	 * preventing creation of origins is worth additional complexity.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
 	/*
 	 * Connect to remote side to execute requested commands and fetch table
-	 * info.
+	 * and sequence info.
 	 */
 	if (opts.connect)
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +784,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *pubrels;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
-			check_publications_origin(wrconn, publications, opts.copy_data,
-									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+			check_publications_origin_tables(wrconn, publications, opts.copy_data,
+											 opts.retaindeadtuples, opts.origin,
+											 NULL, 0, stmt->subname);
+			check_publications_origin_sequences(wrconn, publications,
+												opts.origin, NULL, 0,
+												stmt->subname);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +803,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			pubrels = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = pubrelinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +832,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * Similar to origins, it is not clear whether preventing the slot
+			 * creation for empty and sequence-only subscriptions is worth
+			 * additional complexity.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +859,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,13 +913,15 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
 	List	   *subrel_states;
 	Oid		   *subrel_local_oids;
+	Oid		   *subseq_local_oids;
 	Oid		   *pubrel_local_oids;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
+	int			tbl_count = 0;
+	int			seq_count = 0;
 	int			subrel_count;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
@@ -893,7 +929,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
+	List	   *sub_remove_rels = NIL;
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,71 +952,84 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted array of local relation oids for faster lookup. This
+		 * can potentially contain all relations in the database so speed of
+		 * lookup is important.
+		 *
+		 * We do not yet know the exact count of tables and sequences, so we
+		 * allocate separate arrays for table OIDs and sequence OIDs based on
+		 * the total number of relations (subrel_count).
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
-		off = 0;
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
 		foreach(lc, subrel_states)
 		{
 			SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
 
-			subrel_local_oids[off++] = relstate->relid;
+			if (get_rel_relkind(relstate->relid) == RELKIND_SEQUENCE)
+				subseq_local_oids[seq_count++] = relstate->relid;
+			else
+				subrel_local_oids[tbl_count++] = relstate->relid;
 		}
-		qsort(subrel_local_oids, subrel_count,
+		qsort(subrel_local_oids, tbl_count,
 			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
+		check_publications_origin_tables(wrconn, sub->publications, copy_data,
+										 sub->retaindeadtuples, sub->origin,
+										 subrel_local_oids, tbl_count,
+										 sub->name);
 
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		qsort(subseq_local_oids, seq_count, sizeof(Oid), oid_cmp);
+		check_publications_origin_sequences(wrconn, sub->publications,
+											sub->origin, subseq_local_oids,
+											seq_count, sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = pubrelinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
 			if (!bsearch(&relid, subrel_local_oids,
-						 subrel_count, sizeof(Oid), oid_cmp))
+						 tbl_count, sizeof(Oid), oid_cmp) &&
+				!bsearch(&relid, subseq_local_oids,
+						 seq_count, sizeof(Oid), oid_cmp))
 			{
 				AddSubscriptionRelState(sub->oid, relid,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -987,19 +1037,19 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * Next remove state for tables we should not care about anymore using
 		 * the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
+		qsort(pubrel_local_oids, list_length(pubrels),
 			  sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
-		for (off = 0; off < subrel_count; off++)
+		for (off = 0; off < tbl_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				SubRemoveRels *remove_rel = palloc(sizeof(SubRemoveRels));
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,11 +1071,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
+				remove_rel->relid = relid;
+				remove_rel->state = state;
+
+				sub_remove_rels = lappend(sub_remove_rels, remove_rel);
+
 				logicalrep_worker_stop(sub->oid, relid);
 
 				/*
@@ -1064,10 +1116,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1133,39 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		for (off = 0; off < seq_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubrel_local_oids,
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
+			}
+		}
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1177,57 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Reset all sequences of the subscription to INIT state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	List	   *subrel_states;
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("subscription \"%s\" could not connect to the publisher: %s",
+					   sub->name, err));
+
+	PG_TRY();
+	{
+		check_publications_origin_sequences(wrconn, sub->publications,
+											sub->origin, NULL, 0, sub->name);
+
+		/* Get local sequence list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+									   InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1733,6 +1864,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("%s is not allowed for disabled subscriptions",
+								   "ALTER SUBSCRIPTION ... REFRESH SEQUENCES"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1824,9 +1968,9 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 			if (retain_dead_tuples)
 				check_pub_dead_tuple_retention(wrconn);
 
-			check_publications_origin(wrconn, sub->publications, false,
-									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+			check_publications_origin_tables(wrconn, sub->publications, false,
+											 retain_dead_tuples, origin, NULL, 0,
+											 sub->name);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2008,7 +2152,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2317,6 +2461,106 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
 	table_close(rel, RowExclusiveLock);
 }
 
+static void
+check_publications_origin_sequences(WalReceiverConn *wrconn, List *publications,
+									char *origin, Oid *subrel_local_oids,
+									int subrel_count, char *subname)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	int			i;
+	Oid			tableRow[1] = {TEXTOID};
+	List	   *publist = NIL;
+
+	/* Enable sequence synchronization checks only when origin is 'none' */
+	if (!origin || pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) != 0)
+		return;
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT P.pubname AS pubname\n"
+						   "FROM pg_publication P,\n"
+						   "     LATERAL pg_get_publication_sequences(P.pubname) GPR\n"
+						   "     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid),\n"
+						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+						   "WHERE C.oid = GPR.relid "
+						   "	 AND P.pubname IN (");
+
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoString(&cmd, ")\n");
+
+	/*
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relations that are already
+	 * present on the subscriber. Skip the check for those relations, as they
+	 * will not be re-synced.
+	 */
+	for (i = 0; i < subrel_count; i++)
+	{
+		Oid			relid = subrel_local_oids[i];
+		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+		char	   *tablename = get_rel_name(relid);
+
+		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+							schemaname, tablename);
+	}
+
+	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not receive list of replicated relations from the publisher: %s",
+						res->err)));
+
+	/* Process relations. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *pubname;
+		bool		isnull;
+
+		pubname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		ExecClearTuple(slot);
+		publist = list_append_unique(publist, makeString(pubname));
+	}
+
+	/*
+	 * Log a warning if the publisher has subscribed to the same sequence from
+	 * some other publisher. We cannot know the origin of the sequence data
+	 * during the initial sync.
+	 */
+	if (publist)
+	{
+		StringInfo	pubnames = makeStringInfo();
+		StringInfo	err_msg = makeStringInfo();
+		StringInfo	err_hint = makeStringInfo();
+
+		/* Prepare the list of publication(s) for warning message. */
+		GetPublicationsStr(publist, pubnames, false);
+
+		appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
+						 subname);
+		appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher sequences did not come from other origins."));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_internal("%s", err_msg->data),
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains sequences that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain sequences that are written to by other subscriptions.",
+								 list_length(publist), pubnames->data),
+				errhint_internal("%s", err_hint->data));
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+}
+
 /*
  * Check and log a warning if the publisher has subscribed to the same table,
  * its partition ancestors (if it's a partition), or its partition children (if
@@ -2341,10 +2585,10 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - See comments atop worker.c for more details.
  */
 static void
-check_publications_origin(WalReceiverConn *wrconn, List *publications,
-						  bool copydata, bool retain_dead_tuples,
-						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+check_publications_origin_tables(WalReceiverConn *wrconn, List *publications,
+								 bool copydata, bool retain_dead_tuples,
+								 char *origin, Oid *subrel_local_oids,
+								 int subrel_count, char *subname)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2594,8 +2838,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' matches the RangeVar of an entry in the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(PublicationRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2603,15 +2862,18 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, InvalidOid, CHAROID};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	bool		support_relkind = (server_version >= 160000);
+	bool		support_seq = (server_version >= 190000);
+	int			column_count = check_columnlist ? (support_relkind ? 4 : 3) : 2;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2619,7 +2881,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher. */
 	if (server_version >= 160000)
 	{
 		tableRow[2] = INT2VECTOROID;
@@ -2637,14 +2899,23 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs, c.relkind\n"
+						 "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* Publishers of version 19 and above also support publishing sequences */
+		if (support_seq)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN (%s)",
+							 pub_names->data);
 	}
 	else
 	{
@@ -2662,7 +2933,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2678,22 +2949,32 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind = RELKIND_RELATION;
+		PublicationRelKind *relinfo = palloc_object(PublicationRelKind);
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		if (support_relkind)
+		{
+			relkind = DatumGetChar(slot_getattr(slot, 4, &isnull));
+			Assert(!isnull);
+		}
+
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE &&
+			check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2701,7 +2982,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..3f61714ea7f 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1112,18 +1112,35 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
 
 
 /*
- * Check if we support writing into specific relkind.
+ * Check that we support writing into the relkind of the local relation, and
+ * that it is compatible with the relkind of the relation on the publisher.
  *
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
-						 const char *relname)
+CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+						 const char *nspname, const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (localrelkind != RELKIND_RELATION &&
+		localrelkind != RELKIND_PARTITIONED_TABLE &&
+		localrelkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
-				 errdetail_relkind_not_supported(relkind)));
+				 errdetail_relkind_not_supported(localrelkind)));
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((localrelkind == RELKIND_SEQUENCE && remoterelkind != RELKIND_SEQUENCE) ||
+		(localrelkind != RELKIND_SEQUENCE && remoterelkind == RELKIND_SEQUENCE))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   remoterelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   localrelkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 2436a263dc2..aa8409e0711 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -708,6 +708,9 @@ logicalrep_read_rel(StringInfo in)
 	/* Read the replica identity. */
 	rel->replident = pq_getmsgbyte(in);
 
+	/* relkind is not sent by the publisher */
+	rel->relkind = 0;
+
 	/* Get attribute description */
 	logicalrep_read_attrs(in, rel);
 
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..c66c3064196 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -196,6 +196,17 @@ logicalrep_relmap_update(LogicalRepRelation *remoterel)
 		entry->remoterel.atttyps[i] = remoterel->atttyps[i];
 	}
 	entry->remoterel.replident = remoterel->replident;
+
+	/*
+	 * XXX The walsender currently does not transmit the relkind of the remote
+	 * relation when replicating changes. Since we support replicating only
+	 * table changes at present, we default to initializing relkind as
+	 * RELKIND_RELATION. This is needed in CheckSubscriptionRelkind() to check
+	 * if the publisher and subscriber relation kinds are compatible.
+	 */
+	entry->remoterel.relkind = remoterel->relkind
+		? remoterel->relkind : RELKIND_RELATION;
+
 	entry->remoterel.attkeys = bms_copy(remoterel->attkeys);
 	MemoryContextSwitchTo(oldctx);
 }
@@ -425,6 +436,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 
 		/* Check for supported relkind. */
 		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+								 remoterel->relkind,
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 1bb3ca01db0..e452a1e78d4 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -149,8 +149,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3c58ad88476..d986ba2ea50 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3367,6 +3367,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
 	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+							 relmapentry->remoterel.relkind,
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3563,6 +3564,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 
 					/* Check that new partition also has supported relkind. */
 					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+											 relmapentry->remoterel.relkind,
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 847806b0a2e..05cc7512520 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,9 +1137,9 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
-	 * need to check all the given publication-table mappings and report an
-	 * error if any publications have a different column list.
+	 * fetch_relation_list. But one can later change the publication so we
+	 * still need to check all the given publication-table mappings and report
+	 * an error if any publications have a different column list.
 	 */
 	foreach(lc, publications)
 	{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ad37f9f6ed0..fa08059671b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2319,11 +2319,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..49deec052c6 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,7 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool get_tables,
+									  bool get_sequences, bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..0ba86c2ad72 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..5c935b324eb
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,56 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are registered to be synced to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoints during the test; otherwise, extra values will be fetched
+# for the sequences, which can cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Confirm sequences can be listed in pg_subscription_rel
+my $result = $node_subscriber->safe_psql(
+	'postgres',
+	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+);
+is($result, 'regress_s1|i', "sequence is listed in the pg_subscription_rel catalog");
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 377a7946585..bdf76d0324f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2363,6 +2363,7 @@ PublicationObjSpec
 PublicationObjSpecType
 PublicationPartOpt
 PublicationRelInfo
+PublicationRelKind
 PublicationSchemaInfo
 PublicationTable
 PublishGencolsType
-- 
2.43.0
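
For anyone who wants to try the 0001 behaviour quickly, the flow looks
roughly like this (a minimal sketch distilled from the attached TAP test;
the object names and the connection string are illustrative only):

    -- publisher
    CREATE SEQUENCE regress_s1;
    CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES;

    -- subscriber (the sequence must already exist locally)
    CREATE SEQUENCE regress_s1;
    CREATE SUBSCRIPTION regress_seq_sub
        CONNECTION 'host=... dbname=postgres'
        PUBLICATION regress_seq_pub;

    -- the published sequence is now tracked in pg_subscription_rel with
    -- srsubstate = 'i' (INIT) until it gets synchronized
    SELECT c.relname, sr.srsubstate
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid;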

Attachment: v20251021-0002-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 43656ae2c42a7bd5026727c1fe6f073c8c0bddca Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 21 Oct 2025 18:18:40 +0530
Subject: [PATCH v20251021 2/3] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of the sequences that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; synchronized sequences are committed in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, no
      sequences are added to or removed from pg_subscription_rel in this
      case.

Author: Vignesh C
Reviewers: Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
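
To make the three flows above concrete, the commands involved are roughly
as follows (a sketch only; the subscription, publication, and connection
strings are placeholders):

    -- 1) all published sequences are added with INIT state and the
    --    sequencesync worker synchronizes them
    CREATE SUBSCRIPTION sub1 CONNECTION '...' PUBLICATION pub1;

    -- 2) newly published sequences are added (and dropped ones removed);
    --    only the newly added ones are synchronized
    ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

    -- 3) every sequence already known to the subscription is reset to
    --    INIT and re-synchronized; nothing is added or removed
    ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
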
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  69 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 757 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 110 ++-
 src/backend/replication/logical/tablesync.c   |  85 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 194 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 25 files changed, 1345 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index dffebb521f3..9d01df579aa 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 73b068dd31c..62741995596 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1078,7 +1078,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				sub_remove_rels = lappend(sub_remove_rels, remove_rel);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 				/*
 				 * For READY state, we would have already dropped the
@@ -2129,7 +2129,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..a38a89509d0 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,23 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * subscription id, relid and type.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * For apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables and sequences. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +272,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +333,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +422,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +512,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -628,15 +642,18 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
 
 /*
  * Stop the logical replication worker for subid/relid, if any.
+ *
+ * Similar to logicalrep_worker_find, relid should be set to a valid OID only
+ * for table sync workers.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +720,10 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid,
+									OidIsValid(relid)
+									? WORKERTYPE_TABLESYNC
+									: WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +855,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last sequencesync worker start time (last_seqsync_start_time)
+ * tracked by the subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +922,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1299,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1636,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1676,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..3b161aeed75
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,757 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. The launcher process, in
+ *    contrast, has no direct database access and would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle the apply worker's side of sequence synchronization.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences for which the user lacks sufficient privileges, as
+ * well as sequences that exist on both sides but have mismatched parameters.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s).",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s).",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy the existing data of a sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber's
+ * sequence to the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		ereport(LOG,
+				errmsg("skipping synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+					   nspname, seqname));
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
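+		/*
+		 * Unless the subscription is set to run as its owner, perform the
+		 * privilege check as the sequence's owner.
+		 */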
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
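+		/*
+		 * Apply the publisher's last_value/is_called to the local sequence,
+		 * then mark it READY below, recording the remote page LSN as the
+		 * synchronization point.
+		 */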
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy the existing data of sequences from the publisher, in batches. Each
+ * local sequence is locked and updated as it is processed.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
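+/*
+ * Sequences are synchronized in batches of this size so that each remote
+ * query stays reasonably small and progress is committed after every batch.
+ */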
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
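+		/*
+		 * Build one remote query that returns, for every sequence in this
+		 * batch, its runtime state (last_value, is_called, page_lsn) from
+		 * pg_get_sequence_data() together with its definition parameters
+		 * from pg_sequence, so parameter verification and value copying can
+		 * be done from a single result set.
+		 */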
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * Advance current_index by the number of sequences queued in this
+		 * batch rather than by the number of rows fetched, because some
+		 * sequences may be missing on the publisher. Entries that were
+		 * returned have already been removed from the hash table with
+		 * HASH_REMOVE.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Log missing sequences if any */
+	if (missing_seqs->len)
+		ereport(LOG,
+				errmsg_internal("skipping synchronization of sequences not found on the publisher: (%s)",
+								missing_seqs->data));
+
+	/* Report errors if mismatches or permission issues occurred */
+	if (insuffperm_seqs->len || mismatched_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if the list has not been created or has no sequences yet */
+	if (sequences_to_copy == NULL ||
+		hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
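+/*
+ * Hash function for the sequences_to_copy hash table: combine the hashes of
+ * the schema name and the sequence name.
+ */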
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	/* string_hash() hashes at most keysize - 1 bytes, hence the "+ 1" */
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname) + 1);
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname) + 1);
+
+	return h1 ^ h2;
+}
+
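+/*
+ * Match function for the sequences_to_copy hash table: keys are equal only
+ * if both the schema name and the sequence name match.
+ */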
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
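+	/*
+	 * Scan pg_subscription_rel for this subscription's entries that are not
+	 * yet in READY state and remember the ones that are sequences; tables
+	 * are handled separately by the tablesync workers.
+	 */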
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sequence synchronization with error handling. Disable
+ * the subscription, if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by
+ * system resource issues and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e452a1e78d4..45ab805117b 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,14 +65,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -85,7 +100,48 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -107,6 +163,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -116,18 +178,26 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static since
+	 * their values remain valid until the relation states are invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
+	*has_pending_sequences = false;
 	*started_tx = false;
 
 	if (relation_states_validity != SYNC_RELATIONS_STATE_VALID)
@@ -138,6 +208,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,8 +220,8 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
@@ -159,7 +230,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -184,5 +260,7 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..8543d6c279d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -375,11 +375,12 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	bool		started_tx = false;
 	bool		should_exit = false;
 	Relation	rel = NULL;
+	bool		has_pending_sequences;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -413,6 +414,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +435,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +481,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +553,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1253,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1528,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1574,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1582,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1616,10 +1597,11 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
@@ -1631,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1647,9 +1629,10 @@ HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
 	bool		has_subrels;
+	bool		has_pending_sequences;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(&has_pending_sequences, &started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d986ba2ea50..bf59778b42a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1242,7 +1247,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1364,7 +1372,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1420,7 +1431,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1486,7 +1500,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1621,7 +1638,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2463,7 +2483,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3284,7 +3307,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4134,7 +4157,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5577,7 +5603,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5697,8 +5724,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5809,6 +5836,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5828,14 +5859,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5906,6 +5939,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		am_sequencesync_worker() ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5918,9 +5955,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b51d2b17379..8a2e1d1158a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 49deec052c6..88772a22b80 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ae352f6e691..a7c6588999f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that sequence sync no longer fails.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 5c935b324eb..e4a454598b0 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,15 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,11 +55,184 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
 my $result = $node_subscriber->safe_psql(
-	'postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize existing
+# sequences, but should not sync newly published sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t',
+	'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence when copy_data is off'
+);
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error when the
+# sequence definition does not match between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is ($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for mismatched sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the removal of the sequence missing on the publisher is logged.
+$node_subscriber->wait_for_log(
+	qr/LOG:  ? sequences not found on publisher removed from resynchronization: \("public.regress_s5"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bdf76d0324f..c8599aeb6b7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20251021-0003-Documentation-for-sequence-synchronization.patch (text/x-patch)
From f7699c43e6f059b4d970b3e2ca1d66ffcf6918ab Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Mon, 20 Oct 2025 15:41:14 +0800
Subject: [PATCH v20251021 3/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C
Reviewer:  Amit Kapila, Shveta Malik, Dilip Kumar, Peter Smith, Nisha Moond
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 300 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..0b2402b6ea6 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2192,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 95a24a60a38..b0bd4a7cf5d 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -194,6 +194,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           see <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -224,6 +230,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       does not add or remove the missing publication sequences from the
       subscription.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

#419Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#418)
Re: Logical Replication of sequences

Hi Vignesh, Here are a few small review comments just for the docs
part of patch 0001:

======
Commit message

1.
In addition to the new command, the following subscription commands have
been enhanced to automatically refresh sequence mappings:

~

Is "mappings" the right word?

======
doc/src/sgml/ref/alter_subscription.sgml

2.
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be
executed separately.
-          The default is <literal>true</literal>.
+          When false, the command will not try to refresh table and sequence
+          information. <literal>REFRESH PUBLICATION</literal> should then be
+          executed separately. The default is <literal>true</literal>.
          </para>

I thought false should use markup: <literal>false</literal> (just
like true does)

~~~

3.
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          see <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>

/To do that, see/To do that, use/

~~~

4.
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <literal>ALTER SUBSCRIPTION ... REFRESH
PUBLICATION</literal></link> which
+      only synchronizes newly added sequences, <literal>REFRESH
SEQUENCES</literal>
+      will re-synchronize the sequence data for all subscribed sequences. It
+      does not add or remove the missing publication sequences from the
+      subscription.
+     </para>

4a.
IMO the ALTER SUBSCRIPTION command should have <command> markup
instead of <literal>

~

4b.
I wondered if that part "...for all subscribed sequences" should say
"...for all currently subscribed sequences", just for more clarity.

~

4c.
I do not think you need to say "the missing" here; IMO, that just
leads to more questions -- missing from where? How do you remove
something that is missing? etc. The suggestion below removes the
ambiguity.

SUGGESTION:
It does not add or remove sequences from the subscription to match the
publication.
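
To make the distinction concrete, a small sketch using only the
commands this patch documents ('sub1' is a placeholder subscription
name):

-- adds newly published sequences to the subscription and syncs them once
ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;

-- re-synchronizes data for all currently subscribed sequences,
-- without adding or removing sequences from the subscription
ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;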

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#420Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#418)
1 attachment(s)
Re: Logical Replication of sequences

On Tue, Oct 21, 2025 at 8:11 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 21 Oct 2025 at 03:49, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

---
/*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other
publishers.
+ * This check is required in the following scenarios:
*
* 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
*    statements with "copy_data = true" and "origin = none":
*    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of
such tables
+ *      is in subrel_local_oids.
*
* 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
*    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2440,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
*    - Warn the user that only conflict detection info for local changes on
*      the publisher is retained. Data from other origins may lack sufficient
*      details for reliable conflict detection.
+ *    - This check targets for tables only.
*    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "origin =
+ *    none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
*/

While this function is well documented, I find it's quite complex, and
this patch adds to that complexity. The function has 9 arguments,
making it difficult to understand which combinations of arguments
enable which checks. For example, the function header comment doesn't
explain when to use the only_sequences parameter. At first, I thought
only_sequences should be set to true when checking if the publisher
has subscribed to sequences from other publishers, but looking at the
code, I discovered it doesn't check sequences when check_rdt is true:

+   if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM
pg_get_publication_tables(P.pubname))");
+   else if (only_sequences)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, FALSE as istable FROM
pg_get_publication_sequences(P.pubname))");
+   else
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM
pg_get_publication_tables(P.pubname) UNION ALL"
+                        " SELECT relid, FALSE as istable FROM
pg_get_publication_sequences(P.pubname))");
+

I find that the complexity might stem from checking different cases in
one function, but I don't have better ideas to improve the logic for
now. I think we can at least describe what the caller can expect from
specifying only_sequence to true.

Split this function into check_publications_origin_sequences and
check_publications_origin_tables to reduce the complexity. After this
change we log two warnings if both tables and sequences are subscriber
to the same tables and sequences like:

I think the case where both WARNINGs will be displayed is rare so it
should be okay as it simplifies the code quite a bit. Another thing is
we need to query twice but as this happens during DDL and only for
very specific cases that should also be okay. We can anyway merge
these later if we see any problem with it but for now it would be
better to prefer code simplicity.

When check_publications_origin_sequences() is called from the Alter
Subscription ... Refresh Publication ... or Create Subscription ...
code path, shouldn't we check copy_data as well along with origin as
none? Because, if copy_data is false, we should have added a sequence
in the READY state, so we don't need to fetch its values.
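
As a rough illustration (using only the catalog columns the TAP tests
in this series already poll), with copy_data = false the sequences are
expected to be registered directly in the ready state, so there is no
initial copy whose origin we would need to warn about:

-- on the subscriber, after CREATE SUBSCRIPTION ... WITH (copy_data = false)
SELECT srrelid::regclass, srsubstate
FROM pg_subscription_rel;
-- the sequence rows are expected to already show srsubstate = 'r'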

I have added a few comments in this new function and made a number of
other cosmetic improvements in the attached.

--
With Regards,
Amit Kapila.

Attachments:

v1_amit_1.patch.txt (text/plain)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 73b068dd31c..267b898dca1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -113,18 +113,18 @@ typedef struct PublicationRelKind
 } PublicationRelKind;
 
 static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
-static void check_publications_origin_sequences(WalReceiverConn *wrconn,
-												List *publications,
-												char *origin,
-												Oid *subrel_local_oids,
-												int subrel_count,
-												char *subname);
 static void check_publications_origin_tables(WalReceiverConn *wrconn,
 											 List *publications, bool copydata,
 											 bool retain_dead_tuples,
 											 char *origin,
 											 Oid *subrel_local_oids,
 											 int subrel_count, char *subname);
+static void check_publications_origin_sequences(WalReceiverConn* wrconn,
+												List *publications,
+												char *origin,
+												Oid *subrel_local_oids,
+												int subrel_count,
+												char *subname);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -979,9 +979,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 			else
 				subrel_local_oids[tbl_count++] = relstate->relid;
 		}
-		qsort(subrel_local_oids, tbl_count,
-			  sizeof(Oid), oid_cmp);
 
+		qsort(subrel_local_oids, tbl_count, sizeof(Oid), oid_cmp);
 		check_publications_origin_tables(wrconn, sub->publications, copy_data,
 										 sub->retaindeadtuples, sub->origin,
 										 subrel_local_oids, tbl_count,
@@ -2461,106 +2460,6 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
 	table_close(rel, RowExclusiveLock);
 }
 
-static void
-check_publications_origin_sequences(WalReceiverConn *wrconn, List *publications,
-									char *origin, Oid *subrel_local_oids,
-									int subrel_count, char *subname)
-{
-	WalRcvExecResult *res;
-	StringInfoData cmd;
-	TupleTableSlot *slot;
-	int			i;
-	Oid			tableRow[1] = {TEXTOID};
-	List	   *publist = NIL;
-
-	/* Enable sequence synchronization checks only when origin is 'none' */
-	if (!origin || pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) != 0)
-		return;
-
-	initStringInfo(&cmd);
-	appendStringInfoString(&cmd,
-						   "SELECT DISTINCT P.pubname AS pubname\n"
-						   "FROM pg_publication P,\n"
-						   "     LATERAL pg_get_publication_sequences(P.pubname) GPR\n"
-						   "     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid),\n"
-						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
-						   "WHERE C.oid = GPR.relid "
-						   "	 AND P.pubname IN (");
-
-	GetPublicationsStr(publications, &cmd, true);
-	appendStringInfoString(&cmd, ")\n");
-
-	/*
-	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
-	 * subrel_local_oids contains the list of relations that are already
-	 * present on the subscriber. This check should be skipped as these will
-	 * not be re-synced.
-	 */
-	for (i = 0; i < subrel_count; i++)
-	{
-		Oid			relid = subrel_local_oids[i];
-		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
-		char	   *tablename = get_rel_name(relid);
-
-		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
-							schemaname, tablename);
-	}
-
-	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
-	pfree(cmd.data);
-
-	if (res->status != WALRCV_OK_TUPLES)
-		ereport(ERROR,
-				(errcode(ERRCODE_CONNECTION_FAILURE),
-				 errmsg("could not receive list of replicated relations from the publisher: %s",
-						res->err)));
-
-	/* Process relations. */
-	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
-	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
-	{
-		char	   *pubname;
-		bool		isnull;
-
-		pubname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
-		Assert(!isnull);
-
-		ExecClearTuple(slot);
-		publist = list_append_unique(publist, makeString(pubname));
-	}
-
-	/*
-	 * Log a warning if the publisher has subscribed to the same sequence from
-	 * some other publisher. We cannot know the origin of sequences data during
-	 * the initial sync.
-	 */
-	if (publist)
-	{
-		StringInfo	pubnames = makeStringInfo();
-		StringInfo	err_msg = makeStringInfo();
-		StringInfo	err_hint = makeStringInfo();
-
-		/* Prepare the list of publication(s) for warning message. */
-		GetPublicationsStr(publist, pubnames, false);
-
-		appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
-						 subname);
-		appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher sequences did not come from other origins."));
-
-		ereport(WARNING,
-				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				errmsg_internal("%s", err_msg->data),
-				errdetail_plural("The subscription subscribes to a publication (%s) that contains sequences that are written to by other subscriptions.",
-								 "The subscription subscribes to publications (%s) that contain sequences that are written to by other subscriptions.",
-								 list_length(publist), pubnames->data),
-				errhint_internal("%s", err_hint->data));
-	}
-
-	ExecDropSingleTupleTableSlot(slot);
-
-	walrcv_clear_result(res);
-}
-
 /*
  * Check and log a warning if the publisher has subscribed to the same table,
  * its partition ancestors (if it's a partition), or its partition children (if
@@ -2726,6 +2625,116 @@ check_publications_origin_tables(WalReceiverConn *wrconn, List *publications,
 	walrcv_clear_result(res);
 }
 
+/*
+ * This function is similar to check_publications_origin_tables and serves
+ * same purpose for sequences.
+ *
+ * In addition to the checks where check_publications_origin_tables is used,
+ * this function is also used for ALTER SUBSCRIPTION ... REFRESH SEQUENCES.
+ */
+static void
+check_publications_origin_sequences(WalReceiverConn* wrconn, List* publications,
+									char *origin, Oid *subrel_local_oids,
+									int subrel_count, char *subname)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	int			i;
+	Oid			tableRow[1] = { TEXTOID };
+	List *publist = NIL;
+
+	/*
+	 * Enable sequence synchronization checks only when origin is 'none', to
+	 * ensure that sequence data from other origins is not inadvertently
+	 * copied.
+	 */
+	if (!origin || pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) != 0)
+		return;
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT P.pubname AS pubname\n"
+						   "FROM pg_publication P,\n"
+						   "     LATERAL pg_get_publication_sequences(P.pubname) GPR\n"
+						   "     JOIN pg_subscription_rel PS ON (GPR.relid = PS.srrelid),\n"
+						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+						   "WHERE C.oid = GPR.relid AND P.pubname IN (");
+
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoString(&cmd, ")\n");
+
+	/*
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relations that are already
+	 * present on the subscriber. This check should be skipped as these will
+	 * not be re-synced.
+	 */
+	for (i = 0; i < subrel_count; i++)
+	{
+		Oid			relid = subrel_local_oids[i];
+		char		*schemaname = get_namespace_name(get_rel_namespace(relid));
+		char		*seqname = get_rel_name(relid);
+
+		appendStringInfo(&cmd, "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+						 schemaname, seqname);
+	}
+
+	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not receive list of replicated relations from the publisher: %s",
+						res->err)));
+
+	/* Process relations. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char* pubname;
+		bool		isnull;
+
+		pubname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		ExecClearTuple(slot);
+		publist = list_append_unique(publist, makeString(pubname));
+	}
+
+	/*
+	 * Log a warning if the publisher has subscribed to the same sequence from
+	 * some other publisher. We cannot know the origin of sequences data during
+	 * the initial sync.
+	 */
+	if (publist)
+	{
+		StringInfo	pubnames = makeStringInfo();
+		StringInfo	err_msg = makeStringInfo();
+		StringInfo	err_hint = makeStringInfo();
+
+		/* Prepare the list of publication(s) for warning message. */
+		GetPublicationsStr(publist, pubnames, false);
+
+		appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
+						 subname);
+		appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher sequences did not come from other origins."));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_internal("%s", err_msg->data),
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains sequences that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain sequences that are written to by other subscriptions.",
+								 list_length(publist), pubnames->data),
+				errhint_internal("%s", err_hint->data));
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+}
+
 /*
  * Determine whether the retain_dead_tuples can be enabled based on the
  * publisher's status.
#421Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#418)
Re: Logical Replication of sequences

Hi Vignesh,

Here are a few review comments for patch 0001:

======
src/backend/catalog/pg_subscription.c

GetSubscriptionRelations:

1.
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool otherwise return all the
relations of the subscription, bool get_sequences,
+ bool not_ready)

Now the function parameter means the (unchanged) function comment is
not correct anymore.

e.g. It still says "otherwise return all the relations of the
subscription", but that does not account for the parameters indicating
if you only want tables, or only want sequences.

======
src/backend/commands/subscriptioncmds.c

2.
+typedef struct PublicationRelKind
+{
+ RangeVar   *rv;
+ char relkind;
+} PublicationRelKind;
+

Is this deserving of a comment saying what it is for?

~~~

CreateSubscription:

3.
+ *
+ * Similar to origins, it is not clear whether preventing the slot
+ * creation for empty and sequence-only subscriptions is worth
+ * additional complexity.
  */

I think this "Similar to..." comment needs "XXX", same as the earlier
comment it is referring to.

~~~

AlterSubscription_refresh:

4.
+ * Build qsorted array of local relation oids for faster lookup. This
+ * can potentially contain all relation in the database so speed of
+ * lookup is important.
+ *

/all relation/all relations/

~~~

5.
- qsort(subrel_local_oids, subrel_count,
+ qsort(subrel_local_oids, tbl_count,
    sizeof(Oid), oid_cmp);
- check_publications_origin(wrconn, sub->publications, copy_data,
-   sub->retaindeadtuples, sub->origin,
-   subrel_local_oids, subrel_count, sub->name);
+ check_publications_origin_tables(wrconn, sub->publications, copy_data,
+ sub->retaindeadtuples, sub->origin,
+ subrel_local_oids, tbl_count,
+ sub->name);
- /*
- * Rels that we want to remove from subscription and drop any slots
- * and origins corresponding to them.
- */
- sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+ qsort(subseq_local_oids, seq_count, sizeof(Oid), oid_cmp);
+ check_publications_origin_sequences(wrconn, sub->publications,
+ sub->origin, subseq_local_oids,
+ seq_count, sub->name);

The first qsort wrapping and the blank line spacing are a bit inconsistent here.

~~~

6.
+static void
+check_publications_origin_sequences(WalReceiverConn *wrconn, List
*publications,
+ char *origin, Oid *subrel_local_oids,
+ int subrel_count, char *subname)

Deserving of a function comment?

~

7.
+ for (i = 0; i < subrel_count; i++)
+ {

Declare 'i' as a for-loop variable.

~

8.
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not receive list of replicated relations from the
publisher: %s",
+ res->err)));
+
+ /* Process relations. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))

Since this function is just for sequences, should it say "replicated
sequences" instead of "relations" here?

Similarly, should the "Process relations" maybe say "Process sequence
relations" or "Process sequences"?

~~~

fetch_table_list:

9.
+ appendStringInfo(&cmd,
+ "UNION ALL\n"
+ "  SELECT DISTINCT s.schemaname, s.sequencename, NULL::int2vector AS
attrs, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind\n"
+ "  FROM pg_catalog.pg_publication_sequences s\n"
+ "  WHERE s.pubname IN (%s)",
+ pub_names->data);

Missing a final \n?

======
src/backend/replication/logical/syncutils.c

10.
bool
FetchRelationStates(bool *started_tx)

IIUC, you are using the terminology "relations" to apply to both
tables and sequences; OTOH "tables" is just tables. It seems this
function is table-specific. Should it have a name change like
FetchTableStates?

======
src/test/subscription/t/036_sequences.pl

11.
There are no test cases checking that ALTER commands will correctly
add/remove sequences in the pg_subscription_rel?

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#422shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#420)
Re: Logical Replication of sequences

On Wed, Oct 22, 2025 at 10:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the case where both WARNINGs will be displayed is rare so it
should be okay as it simplifies the code quite a bit. Another thing is
we need to query twice but as this happens during DDL and only for
very specific cases that should also be okay. We can anyway merge
these later if we see any problem with it but for now it would be
better to prefer code simplicity.

+1

Few trivial comments on 001:

1)
In fetch_relation_list(), I feel support_relkind is misleading as now
we are unconditionally supporting fetching relkind once the version >=
16. We can make the function work without having this variable.

2)
+ * Build qsorted array of local relation oids for faster lookup. This
+ * can potentially contain all relation in the database so speed of
+ * lookup is important.

Since we are building multiple arrays now, we can change comment to:
Build qsorted arrays of local table oids and sequence oids for faster
lookup. This can potentially contain all tables and sequences in the
database so speed of lookup is important.

thanks
Shveta

#423Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#421)
Re: Logical Replication of sequences

On Wed, Oct 22, 2025 at 10:53 AM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Here are a few review comments for patch 0001:

======
src/backend/catalog/pg_subscription.c

GetSubscriptionRelations:

1.
List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool otherwise return all the
relations of the subscription, bool get_sequences,
+ bool not_ready)

Now the function parameter means the (unchanged) function comment is
not correct anymore.

e.g. It still says "otherwise return all the relations of the
subscription", but that does not account for the parameters indicating
if you only want tables, or only want sequences.

I think the code for those parameters is quite clear. The previous
version had a few comments, but I found those unnecessary.

src/backend/replication/logical/syncutils.c

10.
bool
FetchRelationStates(bool *started_tx)

IIUC, you are using the terminology "relations" to apply to both
tables or sequences; OTOH ", tables" is just tables. It seems this
function is table-specific. Should it have a name change like
FetchTableStates?

The patch 0002 has some changes related to sequences in this function.
So, I think the current naming is okay.

======
src/test/subscription/t/036_sequences.pl

11.
There are no test cases checking that ALTER commands will correctly
add/remove sequences in the pg_subscription_rel?

The patch 0002 has tests related to ALTER commands. I don't think 0001
needs tests for all kinds of statements as both will be committed a
few days apart.

--
With Regards,
Amit Kapila.

#424Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#419)
Re: Logical Replication of sequences

On Wed, Oct 22, 2025 at 7:52 AM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh, Here are a few small review comments just for the docs
part of patch 0001:

======
Commit message

1.
In addition to the new command, the following subscription commands have
been enhanced to automatically refresh sequence mappings:

~

Is "mappings" the right word?

I think so because pg_subscription_rel has mappings of relations
between the publisher and the subscriber.
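
For instance, those mappings can be inspected on the subscriber with
something like:

SELECT relname, srsubstate
FROM pg_class, pg_subscription_rel
WHERE oid = srrelid;

which lists the tables and sequences registered in pg_subscription_rel
along with their sync state.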

--
With Regards,
Amit Kapila.

#425vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#420)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 22 Oct 2025 at 10:36, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 21, 2025 at 8:11 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, 21 Oct 2025 at 03:49, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

---
/*
- * Check and log a warning if the publisher has subscribed to the same table,
- * its partition ancestors (if it's a partition), or its partition children (if
- * it's a partitioned table), from some other publishers. This check is
- * required in the following scenarios:
+ * Check and log a warning if the publisher has subscribed to the same relation
+ * (table or sequence), its partition ancestors (if it's a partition), or its
+ * partition children (if it's a partitioned table), from some other
publishers.
+ * This check is required in the following scenarios:
*
* 1) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
*    statements with "copy_data = true" and "origin = none":
*    - Warn the user that data with an origin might have been copied.
- *    - This check is skipped for tables already added, as incremental sync via
- *      WAL allows origin tracking. The list of such tables is in
- *      subrel_local_oids.
+ *    - This check is skipped for tables and sequences already added, as
+ *      incremental sync via WAL allows origin tracking. The list of
such tables
+ *      is in subrel_local_oids.
*
* 2) For CREATE SUBSCRIPTION and ALTER SUBSCRIPTION ... REFRESH PUBLICATION
*    statements with "retain_dead_tuples = true" and "origin = any", and for
@@ -2338,13 +2440,19 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
*    - Warn the user that only conflict detection info for local changes on
*      the publisher is retained. Data from other origins may lack sufficient
*      details for reliable conflict detection.
+ *    - This check targets for tables only.
*    - See comments atop worker.c for more details.
+ *
+ * 3) For ALTER SUBSCRIPTION ... REFRESH SEQUENCE statements with "origin =
+ *    none":
+ *    - Warn the user that sequence data from another origin might have been
+ *      copied.
*/

While this function is well documented, I find it's quite complex, and
this patch adds to that complexity. The function has 9 arguments,
making it difficult to understand which combinations of arguments
enable which checks. For example, the function header comment doesn't
explain when to use the only_sequences parameter. At first, I thought
only_sequences should be set to true when checking if the publisher
has subscribed to sequences from other publishers, but looking at the
code, I discovered it doesn't check sequences when check_rdt is true:

+   if (walrcv_server_version(wrconn) < 190000 || check_rdt)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM
pg_get_publication_tables(P.pubname))");
+   else if (only_sequences)
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, FALSE as istable FROM
pg_get_publication_sequences(P.pubname))");
+   else
+       appendStringInfo(&cmd, query,
+                        "(SELECT relid, TRUE as istable FROM
pg_get_publication_tables(P.pubname) UNION ALL"
+                        " SELECT relid, FALSE as istable FROM
pg_get_publication_sequences(P.pubname))");
+

I find that the complexity might stem from checking different cases in
one function, but I don't have better ideas to improve the logic for
now. I think we can at least describe what the caller can expect when
only_sequences is set to true.

Split this function into check_publications_origin_sequences and
check_publications_origin_tables to reduce the complexity. After this
change we log two warnings (one for tables and one for sequences) if
the publisher has subscribed to both the same tables and the same
sequences, like:

I think the case where both WARNINGs will be displayed is rare, so it
should be okay, and it simplifies the code quite a bit. Another thing
is that we need to query twice, but as this happens during DDL and
only in very specific cases, that should also be okay. We can anyway
merge these later if we see any problem with it, but for now it is
better to prefer code simplicity.

I agree.

When check_publications_origin_sequences() is called from the ALTER
SUBSCRIPTION ... REFRESH PUBLICATION or CREATE SUBSCRIPTION code path,
shouldn't we check copy_data as well, along with origin being none?
Because, if copy_data is false, we would have added the sequence in
the READY state, so we don't need to fetch its values.

Fixed this.
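
For reference, a minimal sketch of that case (the connection string
and object names are made up): with copy_data = false the sequences
should be added directly in the ready ('r') state, so there is nothing
to fetch and the origin check can be skipped:

CREATE SUBSCRIPTION regress_sub
    CONNECTION 'dbname=postgres host=publisher'
    PUBLICATION regress_pub
    WITH (copy_data = false, origin = none);

SELECT srrelid::regclass, srsubstate
FROM pg_subscription_rel;   -- expect 'r' for the sequences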

I have added a few comments in this new function and made a number of
other cosmetic improvements in the attached.

Thanks for the changes, I merged it.

The attached patch has the changes for the same.
I have also addressed the other comments: a) Shveta's comments at [1]/messages/by-id/CAJpy0uDesLXjpDiDs6fA8HMr419D2YrXb7tA10e9Bp+uCypZ_Q@mail.gmail.com
b) Peter's comments at [2]/messages/by-id/CAHut+Ptp+FMHgS-kaKZwWEoNeyomecUDGECvoCpfx4_eCUDyUA@mail.gmail.com & [3]/messages/by-id/CAHut+Pvx945dJGhMtf2Rv5p8Xn4xQke65MfO-UwK3cRPnsXFRQ@mail.gmail.com c) Shveta's 2nd patch comments at [4]/messages/by-id/CAJpy0uDxms8ynrHWXHGFuxDLA7QzDLLASqpqdWnD=La8UJPt7Q@mail.gmail.com
and d) Chao's comment#12 from [5]/messages/by-id/598FC353-8E9A-4857-A125-740BE24DCBEB@gmail.com which was pending.

[1]: /messages/by-id/CAJpy0uDesLXjpDiDs6fA8HMr419D2YrXb7tA10e9Bp+uCypZ_Q@mail.gmail.com
[2]: /messages/by-id/CAHut+Ptp+FMHgS-kaKZwWEoNeyomecUDGECvoCpfx4_eCUDyUA@mail.gmail.com
[3]: /messages/by-id/CAHut+Pvx945dJGhMtf2Rv5p8Xn4xQke65MfO-UwK3cRPnsXFRQ@mail.gmail.com
[4]: /messages/by-id/CAJpy0uDxms8ynrHWXHGFuxDLA7QzDLLASqpqdWnD=La8UJPt7Q@mail.gmail.com
[5]: /messages/by-id/598FC353-8E9A-4857-A125-740BE24DCBEB@gmail.com

Regards,
Vignesh

Attachments:

v20251023-0001-Introduce-REFRESH-SEQUENCES-for-subscripti.patch (application/octet-stream)
From b4fcd19189d5bea6af798e14195b58968f0b637b Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Wed, 15 Oct 2025 16:57:15 +0800
Subject: [PATCH v20251023 1/3] Introduce "REFRESH SEQUENCES" for
 subscriptions.

This patch adds support for a new SQL command:
ALTER SUBSCRIPTION ... REFRESH SEQUENCES
This command updates the sequence entries present in the
pg_subscription_rel catalog table with the INIT state to trigger
resynchronization.

In addition to the new command, the following subscription commands have
been enhanced to automatically refresh sequence mappings:
ALTER SUBSCRIPTION ... REFRESH PUBLICATION
ALTER SUBSCRIPTION ... ADD PUBLICATION
ALTER SUBSCRIPTION ... DROP PUBLICATION
ALTER SUBSCRIPTION ... SET PUBLICATION

These commands will perform the following actions:
Add newly published sequences that are not yet part of the subscription.
Remove sequences that are no longer included in the publication.

This ensures that sequence replication remains aligned with the current
state of the publication on the publisher side.

Note that the actual synchronization of sequence data/values will be
handled in a subsequent patch that introduces a dedicated sequence sync
worker.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                  |  29 +-
 doc/src/sgml/ref/alter_subscription.sgml    |  44 +-
 src/backend/catalog/pg_subscription.c       |  53 ++-
 src/backend/commands/subscriptioncmds.c     | 480 ++++++++++++++++----
 src/backend/executor/execReplication.c      |  28 +-
 src/backend/parser/gram.y                   |   9 +
 src/backend/replication/logical/proto.c     |   3 +
 src/backend/replication/logical/relation.c  |  12 +
 src/backend/replication/logical/syncutils.c |   5 +-
 src/backend/replication/logical/tablesync.c |   2 +-
 src/backend/replication/logical/worker.c    |   2 +
 src/backend/replication/pgoutput/pgoutput.c |   6 +-
 src/bin/psql/tab-complete.in.c              |  10 +-
 src/include/catalog/pg_subscription_rel.h   |   3 +-
 src/include/executor/executor.h             |   4 +-
 src/include/nodes/parsenodes.h              |   1 +
 src/test/subscription/meson.build           |   1 +
 src/test/subscription/t/036_sequences.pl    |  55 +++
 src/tools/pgindent/typedefs.list            |   1 +
 19 files changed, 608 insertions(+), 140 deletions(-)
 create mode 100644 src/test/subscription/t/036_sequences.pl

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 9b3aae8603b..6c8a0f173c9 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8199,16 +8199,19 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
   </indexterm>
 
   <para>
-   The catalog <structname>pg_subscription_rel</structname> contains the
-   state for each replicated relation in each subscription.  This is a
-   many-to-many mapping.
+   The catalog <structname>pg_subscription_rel</structname> stores the
+   state of each replicated table and sequence for each subscription.  This
+   is a many-to-many mapping.
   </para>
 
   <para>
-   This catalog only contains tables known to the subscription after running
-   either <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> or
-   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... REFRESH
-   PUBLICATION</command></link>.
+   This catalog contains tables and sequences known to the subscription
+   after running:
+   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>,
+   <link linkend="sql-altersubscription-params-refresh-publication">
+   <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>, or
+   <link linkend="sql-altersubscription-params-refresh-sequences">
+   <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
   </para>
 
   <table>
@@ -8242,7 +8245,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
@@ -8251,12 +8254,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        <structfield>srsubstate</structfield> <type>char</type>
       </para>
       <para>
-       State code:
+       State code for the table or sequence.
+      </para>
+      <para>
+       State codes for tables:
        <literal>i</literal> = initialize,
        <literal>d</literal> = data is being copied,
        <literal>f</literal> = finished table copy,
        <literal>s</literal> = synchronized,
        <literal>r</literal> = ready (normal replication)
+      </para>
+      <para>
+       State codes for sequences:
+       <literal>i</literal> = initialize,
+       <literal>r</literal> = ready
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 12f72ba3167..d55e52dcc14 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -26,6 +26,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET PUBLICA
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ADD PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DROP PUBLICATION <replaceable class="parameter">publication_name</replaceable> [, ...] [ WITH ( <replaceable class="parameter">publication_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH PUBLICATION [ WITH ( <replaceable class="parameter">refresh_option</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> REFRESH SEQUENCES
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> ENABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> DISABLE
 ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">subscription_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
@@ -139,9 +140,10 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
         <term><literal>refresh</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          When false, the command will not try to refresh table information.
-          <literal>REFRESH PUBLICATION</literal> should then be executed separately.
-          The default is <literal>true</literal>.
+          When <literal>false</literal>, the command will not try to refresh
+          table and sequence information. <literal>REFRESH PUBLICATION</literal>
+          should then be executed separately. The default is
+          <literal>true</literal>.
          </para>
         </listitem>
        </varlistentry>
@@ -158,30 +160,41 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     <term><literal>REFRESH PUBLICATION</literal></term>
     <listitem>
      <para>
-      Fetch missing table information from publisher.  This will start
+      Fetch missing table and sequence information from the publisher.  This will start
       replication of tables that were added to the subscribed-to publications
       since <link linkend="sql-createsubscription">
       <command>CREATE SUBSCRIPTION</command></link> or
       the last invocation of <command>REFRESH PUBLICATION</command>.
      </para>
 
+     <para>
+      The system catalog <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+      is updated to record all tables and sequences known to the subscription,
+      that are still part of the publication.
+     </para>
+
      <para>
       <replaceable>refresh_option</replaceable> specifies additional options for the
-      refresh operation.  The supported options are:
+      refresh operation.  The only supported option is:
 
       <variablelist>
        <varlistentry>
         <term><literal>copy_data</literal> (<type>boolean</type>)</term>
         <listitem>
          <para>
-          Specifies whether to copy pre-existing data in the publications
-          that are being subscribed to when the replication starts.
-          The default is <literal>true</literal>.
+          Specifies whether to copy pre-existing data for tables and synchronize
+          sequences in the publications that are being subscribed to when the replication
+          starts. The default is <literal>true</literal>.
          </para>
          <para>
           Previously subscribed tables are not copied, even if a table's row
           filter <literal>WHERE</literal> clause has since been modified.
          </para>
+         <para>
+          Previously subscribed sequences are not re-synchronized. To do that,
+          use <link linkend="sql-altersubscription-params-refresh-sequences">
+          <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -200,6 +213,21 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
     </listitem>
    </varlistentry>
 
+   <varlistentry id="sql-altersubscription-params-refresh-sequences">
+    <term><literal>REFRESH SEQUENCES</literal></term>
+    <listitem>
+     <para>
+      Re-synchronize sequence data with the publisher. Unlike
+      <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link> which
+      only has the ability to synchronize newly added sequences,
+      <literal>REFRESH SEQUENCES</literal> will re-synchronize the sequence
+      data for all currently subscribed sequences. It does not add or remove
+      sequences from the subscription to match the publication.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="sql-altersubscription-params-enable">
     <term><literal>ENABLE</literal></term>
     <listitem>
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index e06587b0265..15b233a37d8 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -284,7 +284,7 @@ AddSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u already exists",
+		elog(ERROR, "subscription relation %u in subscription %u already exists",
 			 relid, subid);
 
 	/* Form the tuple. */
@@ -478,9 +478,13 @@ RemoveSubscriptionRel(Oid subid, Oid relid)
 		 * synchronization is in progress unless the caller updates the
 		 * corresponding subscription as well. This is to ensure that we don't
 		 * leave tablesync slots or origins in the system when the
-		 * corresponding table is dropped.
+		 * corresponding table is dropped. For sequences, however, it's ok to
+		 * drop them since no separate slots or origins are created during
+		 * synchronization.
 		 */
-		if (!OidIsValid(subid) && subrel->srsubstate != SUBREL_STATE_READY)
+		if (!OidIsValid(subid) &&
+			subrel->srsubstate != SUBREL_STATE_READY &&
+			get_rel_relkind(subrel->srrelid) != RELKIND_SEQUENCE)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -517,7 +521,8 @@ HasSubscriptionTables(Oid subid)
 	Relation	rel;
 	ScanKeyData skey[1];
 	SysScanDesc scan;
-	bool		has_subrels;
+	HeapTuple	tup;
+	bool		has_subtables = false;
 
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
@@ -529,14 +534,27 @@ HasSubscriptionTables(Oid subid)
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 1, skey);
 
-	/* If even a single tuple exists then the subscription has tables. */
-	has_subrels = HeapTupleIsValid(systable_getnext(scan));
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+		relkind = get_rel_relkind(subrel->srrelid);
+
+		if (relkind == RELKIND_RELATION ||
+			relkind == RELKIND_PARTITIONED_TABLE)
+		{
+			has_subtables = true;
+			break;
+		}
+	}
 
 	/* Cleanup */
 	systable_endscan(scan);
 	table_close(rel, AccessShareLock);
 
-	return has_subrels;
+	return has_subtables;
 }
 
 /*
@@ -547,7 +565,8 @@ HasSubscriptionTables(Oid subid)
  * returned list is palloc'ed in the current memory context.
  */
 List *
-GetSubscriptionRelations(Oid subid, bool not_ready)
+GetSubscriptionRelations(Oid subid, bool tables, bool sequences,
+						 bool not_ready)
 {
 	List	   *res = NIL;
 	Relation	rel;
@@ -556,6 +575,9 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 	ScanKeyData skey[2];
 	SysScanDesc scan;
 
+	/* One or both of 'tables' and 'sequences' must be true. */
+	Assert(tables || sequences);
+
 	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
 
 	ScanKeyInit(&skey[nkeys++],
@@ -578,9 +600,24 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
 		SubscriptionRelState *relstate;
 		Datum		d;
 		bool		isnull;
+		char		relkind;
 
 		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
 
+		/* Relation is either a sequence or a table */
+		relkind = get_rel_relkind(subrel->srrelid);
+		Assert(relkind == RELKIND_SEQUENCE || relkind == RELKIND_RELATION ||
+			   relkind == RELKIND_PARTITIONED_TABLE);
+
+		/* Skip sequences if they were not requested */
+		if ((relkind == RELKIND_SEQUENCE) && !sequences)
+			continue;
+
+		/* Skip tables if they were not requested */
+		if ((relkind == RELKIND_RELATION ||
+			 relkind == RELKIND_PARTITIONED_TABLE) && !tables)
+			continue;
+
 		relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
 		relstate->relid = subrel->srrelid;
 		relstate->state = subrel->srsubstate;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 0f54686b699..a0974d71de1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -106,12 +106,29 @@ typedef struct SubOpts
 	XLogRecPtr	lsn;
 } SubOpts;
 
-static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
-static void check_publications_origin(WalReceiverConn *wrconn,
-									  List *publications, bool copydata,
-									  bool retain_dead_tuples, char *origin,
-									  Oid *subrel_local_oids, int subrel_count,
-									  char *subname);
+/*
+ * PublicationRelKind represents a relation included in a publication.
+ * It stores the schema-qualified relation name (rv) and its kind (relkind).
+ */
+typedef struct PublicationRelKind
+{
+	RangeVar   *rv;
+	char		relkind;
+} PublicationRelKind;
+
+static List *fetch_relation_list(WalReceiverConn *wrconn, List *publications);
+static void check_publications_origin_tables(WalReceiverConn *wrconn,
+											 List *publications, bool copydata,
+											 bool retain_dead_tuples,
+											 char *origin,
+											 Oid *subrel_local_oids,
+											 int subrel_count, char *subname);
+static void check_publications_origin_sequences(WalReceiverConn *wrconn,
+												List *publications,
+												bool copydata, char *origin,
+												Oid *subrel_local_oids,
+												int subrel_count,
+												char *subname);
 static void check_pub_dead_tuple_retention(WalReceiverConn *wrconn);
 static void check_duplicates_in_publist(List *publist, Datum *datums);
 static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
@@ -736,20 +753,27 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
 
+	/*
+	 * A replication origin is currently created for all subscriptions,
+	 * including those that only contain sequences or are otherwise empty.
+	 *
+	 * XXX: While this is technically unnecessary, optimizing it would require
+	 * additional logic to skip origin creation during DDL operations and
+	 * apply workers initialization, and to handle origin creation dynamically
+	 * when tables are added to the subscription. It is not clear whether
+	 * preventing creation of origins is worth additional complexity.
+	 */
 	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
 	replorigin_create(originname);
 
 	/*
 	 * Connect to remote side to execute requested commands and fetch table
-	 * info.
+	 * and sequence info.
 	 */
 	if (opts.connect)
 	{
 		char	   *err;
 		WalReceiverConn *wrconn;
-		List	   *tables;
-		ListCell   *lc;
-		char		table_state;
 		bool		must_use_password;
 
 		/* Try to connect to the publisher. */
@@ -764,10 +788,18 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 
 		PG_TRY();
 		{
+			bool		has_tables = false;
+			List	   *pubrels;
+			char		relation_state;
+
 			check_publications(wrconn, publications);
-			check_publications_origin(wrconn, publications, opts.copy_data,
-									  opts.retaindeadtuples, opts.origin,
-									  NULL, 0, stmt->subname);
+			check_publications_origin_tables(wrconn, publications,
+											 opts.copy_data,
+											 opts.retaindeadtuples, opts.origin,
+											 NULL, 0, stmt->subname);
+			check_publications_origin_sequences(wrconn, publications,
+												opts.copy_data, opts.origin,
+												NULL, 0, stmt->subname);
 
 			if (opts.retaindeadtuples)
 				check_pub_dead_tuple_retention(wrconn);
@@ -776,25 +808,28 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * Set sync state based on if we were asked to do data copy or
 			 * not.
 			 */
-			table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+			relation_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
 
 			/*
-			 * Get the table list from publisher and build local table status
-			 * info.
+			 * Build local relation status info. Relations are for both tables
+			 * and sequences from the publisher.
 			 */
-			tables = fetch_table_list(wrconn, publications);
-			foreach(lc, tables)
+			pubrels = fetch_relation_list(wrconn, publications);
+
+			foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 			{
-				RangeVar   *rv = (RangeVar *) lfirst(lc);
 				Oid			relid;
+				char		relkind;
+				RangeVar   *rv = pubrelinfo->rv;
 
 				relid = RangeVarGetRelid(rv, AccessShareLock, false);
+				relkind = get_rel_relkind(relid);
 
 				/* Check for supported relkind. */
-				CheckSubscriptionRelkind(get_rel_relkind(relid),
+				CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 										 rv->schemaname, rv->relname);
-
-				AddSubscriptionRelState(subid, relid, table_state,
+				has_tables |= (relkind != RELKIND_SEQUENCE);
+				AddSubscriptionRelState(subid, relid, relation_state,
 										InvalidXLogRecPtr, true);
 			}
 
@@ -802,6 +837,10 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 			 * If requested, create permanent slot for the subscription. We
 			 * won't use the initial snapshot for anything, so no need to
 			 * export it.
+			 *
+			 * XXX: Similar to origins, it is not clear whether preventing the
+			 * slot creation for empty and sequence-only subscriptions is
+			 * worth additional complexity.
 			 */
 			if (opts.create_slot)
 			{
@@ -825,7 +864,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 				 * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
 				 * PUBLICATION to work.
 				 */
-				if (opts.twophase && !opts.copy_data && tables != NIL)
+				if (opts.twophase && !opts.copy_data && has_tables)
 					twophase_enabled = true;
 
 				walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -879,21 +918,24 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 						  List *validate_publications)
 {
 	char	   *err;
-	List	   *pubrel_names;
+	List	   *pubrels = NIL;
+	Oid		   *pubrel_local_oids;
 	List	   *subrel_states;
+	List	   *sub_remove_rels = NIL;
 	Oid		   *subrel_local_oids;
-	Oid		   *pubrel_local_oids;
+	Oid		   *subseq_local_oids;
+	int			subrel_count;
 	ListCell   *lc;
 	int			off;
-	int			remove_rel_len;
-	int			subrel_count;
+	int			tbl_count = 0;
+	int			seq_count = 0;
 	Relation	rel = NULL;
 	typedef struct SubRemoveRels
 	{
 		Oid			relid;
 		char		state;
 	} SubRemoveRels;
-	SubRemoveRels *sub_remove_rels;
+
 	WalReceiverConn *wrconn;
 	bool		must_use_password;
 
@@ -915,71 +957,84 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		if (validate_publications)
 			check_publications(wrconn, validate_publications);
 
-		/* Get the table list from publisher. */
-		pubrel_names = fetch_table_list(wrconn, sub->publications);
+		/* Get the relation list from publisher. */
+		pubrels = fetch_relation_list(wrconn, sub->publications);
 
-		/* Get local table list. */
-		subrel_states = GetSubscriptionRelations(sub->oid, false);
+		/* Get local relation list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, true, true, false);
 		subrel_count = list_length(subrel_states);
 
 		/*
-		 * Build qsorted array of local table oids for faster lookup. This can
-		 * potentially contain all tables in the database so speed of lookup
-		 * is important.
+		 * Build qsorted arrays of local table oids and sequence oids for
+		 * faster lookup. This can potentially contain all tables and
+		 * sequences in the database so speed of lookup is important.
+		 *
+		 * We do not yet know the exact count of tables and sequences, so we
+		 * allocate separate arrays for table OIDs and sequence OIDs based on
+		 * the total number of relations (subrel_count).
 		 */
 		subrel_local_oids = palloc(subrel_count * sizeof(Oid));
-		off = 0;
+		subseq_local_oids = palloc(subrel_count * sizeof(Oid));
 		foreach(lc, subrel_states)
 		{
 			SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
 
-			subrel_local_oids[off++] = relstate->relid;
+			if (get_rel_relkind(relstate->relid) == RELKIND_SEQUENCE)
+				subseq_local_oids[seq_count++] = relstate->relid;
+			else
+				subrel_local_oids[tbl_count++] = relstate->relid;
 		}
-		qsort(subrel_local_oids, subrel_count,
-			  sizeof(Oid), oid_cmp);
 
-		check_publications_origin(wrconn, sub->publications, copy_data,
-								  sub->retaindeadtuples, sub->origin,
-								  subrel_local_oids, subrel_count, sub->name);
+		qsort(subrel_local_oids, tbl_count, sizeof(Oid), oid_cmp);
+		check_publications_origin_tables(wrconn, sub->publications, copy_data,
+										 sub->retaindeadtuples, sub->origin,
+										 subrel_local_oids, tbl_count,
+										 sub->name);
 
-		/*
-		 * Rels that we want to remove from subscription and drop any slots
-		 * and origins corresponding to them.
-		 */
-		sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+		qsort(subseq_local_oids, seq_count, sizeof(Oid), oid_cmp);
+		check_publications_origin_sequences(wrconn, sub->publications,
+											copy_data, sub->origin,
+											subseq_local_oids, seq_count,
+											sub->name);
 
 		/*
-		 * Walk over the remote tables and try to match them to locally known
-		 * tables. If the table is not known locally create a new state for
-		 * it.
+		 * Walk over the remote relations and try to match them to locally
+		 * known relations. If the relation is not known locally create a new
+		 * state for it.
 		 *
-		 * Also builds array of local oids of remote tables for the next step.
+		 * Also builds array of local oids of remote relations for the next
+		 * step.
 		 */
 		off = 0;
-		pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+		pubrel_local_oids = palloc(list_length(pubrels) * sizeof(Oid));
 
-		foreach(lc, pubrel_names)
+		foreach_ptr(PublicationRelKind, pubrelinfo, pubrels)
 		{
-			RangeVar   *rv = (RangeVar *) lfirst(lc);
+			RangeVar   *rv = pubrelinfo->rv;
 			Oid			relid;
+			char		relkind;
 
 			relid = RangeVarGetRelid(rv, AccessShareLock, false);
+			relkind = get_rel_relkind(relid);
 
 			/* Check for supported relkind. */
-			CheckSubscriptionRelkind(get_rel_relkind(relid),
+			CheckSubscriptionRelkind(relkind, pubrelinfo->relkind,
 									 rv->schemaname, rv->relname);
 
 			pubrel_local_oids[off++] = relid;
 
 			if (!bsearch(&relid, subrel_local_oids,
-						 subrel_count, sizeof(Oid), oid_cmp))
+						 tbl_count, sizeof(Oid), oid_cmp) &&
+				!bsearch(&relid, subseq_local_oids,
+						 seq_count, sizeof(Oid), oid_cmp))
 			{
 				AddSubscriptionRelState(sub->oid, relid,
 										copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 										InvalidXLogRecPtr, true);
 				ereport(DEBUG1,
-						(errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
-										 rv->schemaname, rv->relname, sub->name)));
+						errmsg_internal("%s \"%s.%s\" added to subscription \"%s\"",
+										relkind == RELKIND_SEQUENCE ? "sequence" : "table",
+										rv->schemaname, rv->relname, sub->name));
 			}
 		}
 
@@ -987,19 +1042,18 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * Next remove state for tables we should not care about anymore using
 		 * the data we collected above
 		 */
-		qsort(pubrel_local_oids, list_length(pubrel_names),
-			  sizeof(Oid), oid_cmp);
+		qsort(pubrel_local_oids, list_length(pubrels), sizeof(Oid), oid_cmp);
 
-		remove_rel_len = 0;
-		for (off = 0; off < subrel_count; off++)
+		for (off = 0; off < tbl_count; off++)
 		{
 			Oid			relid = subrel_local_oids[off];
 
 			if (!bsearch(&relid, pubrel_local_oids,
-						 list_length(pubrel_names), sizeof(Oid), oid_cmp))
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
 			{
 				char		state;
 				XLogRecPtr	statelsn;
+				SubRemoveRels *remove_rel = palloc(sizeof(SubRemoveRels));
 
 				/*
 				 * Lock pg_subscription_rel with AccessExclusiveLock to
@@ -1021,11 +1075,13 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				/* Last known rel state. */
 				state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
 
-				sub_remove_rels[remove_rel_len].relid = relid;
-				sub_remove_rels[remove_rel_len++].state = state;
-
 				RemoveSubscriptionRel(sub->oid, relid);
 
+				remove_rel->relid = relid;
+				remove_rel->state = state;
+
+				sub_remove_rels = lappend(sub_remove_rels, remove_rel);
+
 				logicalrep_worker_stop(sub->oid, relid);
 
 				/*
@@ -1064,10 +1120,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		 * to be at the end because otherwise if there is an error while doing
 		 * the database operations we won't be able to rollback dropped slots.
 		 */
-		for (off = 0; off < remove_rel_len; off++)
+		foreach_ptr(SubRemoveRels, rel, sub_remove_rels)
 		{
-			if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
-				sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+			if (rel->state != SUBREL_STATE_READY &&
+				rel->state != SUBREL_STATE_SYNCDONE)
 			{
 				char		syncslotname[NAMEDATALEN] = {0};
 
@@ -1081,11 +1137,39 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 				 * dropped slots and fail. For these reasons, we allow
 				 * missing_ok = true for the drop.
 				 */
-				ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+				ReplicationSlotNameForTablesync(sub->oid, rel->relid,
 												syncslotname, sizeof(syncslotname));
 				ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
 			}
 		}
+
+		/*
+		 * Next remove state for sequences we should not care about anymore
+		 * using the data we collected above
+		 */
+		for (off = 0; off < seq_count; off++)
+		{
+			Oid			relid = subseq_local_oids[off];
+
+			if (!bsearch(&relid, pubrel_local_oids,
+						 list_length(pubrels), sizeof(Oid), oid_cmp))
+			{
+				/*
+				 * This locking ensures that the state of rels won't change
+				 * till we are done with this refresh operation.
+				 */
+				if (!rel)
+					rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+				RemoveSubscriptionRel(sub->oid, relid);
+
+				ereport(DEBUG1,
+						errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+										get_namespace_name(get_rel_namespace(relid)),
+										get_rel_name(relid),
+										sub->name));
+			}
+		}
 	}
 	PG_FINALLY();
 	{
@@ -1097,6 +1181,58 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 		table_close(rel, NoLock);
 }
 
+/*
+ * Marks all sequences with INIT state.
+ */
+static void
+AlterSubscription_refresh_seq(Subscription *sub)
+{
+	char	   *err = NULL;
+	WalReceiverConn *wrconn;
+	bool		must_use_password;
+
+	/* Load the library providing us libpq calls. */
+	load_file("libpqwalreceiver", false);
+
+	/* Try to connect to the publisher. */
+	must_use_password = sub->passwordrequired && !sub->ownersuperuser;
+	wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,
+							sub->name, &err);
+	if (!wrconn)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("subscription \"%s\" could not connect to the publisher: %s",
+					   sub->name, err));
+
+	PG_TRY();
+	{
+		List	   *subrel_states;
+
+		check_publications_origin_sequences(wrconn, sub->publications, true,
+											sub->origin, NULL, 0, sub->name);
+
+		/* Get local sequence list. */
+		subrel_states = GetSubscriptionRelations(sub->oid, false, true, false);
+		foreach_ptr(SubscriptionRelState, subrel, subrel_states)
+		{
+			Oid			relid = subrel->relid;
+
+			UpdateSubscriptionRelState(sub->oid, relid, SUBREL_STATE_INIT,
+									   InvalidXLogRecPtr, false);
+			ereport(DEBUG1,
+					errmsg_internal("sequence \"%s.%s\" of subscription \"%s\" set to INIT state",
+									get_namespace_name(get_rel_namespace(relid)),
+									get_rel_name(relid),
+									sub->name));
+		}
+	}
+	PG_FINALLY();
+	{
+		walrcv_disconnect(wrconn);
+	}
+	PG_END_TRY();
+}
+
 /*
  * Common checks for altering failover, two_phase, and retain_dead_tuples
  * options.
@@ -1733,6 +1869,19 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 				break;
 			}
 
+		case ALTER_SUBSCRIPTION_REFRESH_SEQUENCES:
+			{
+				if (!sub->enabled)
+					ereport(ERROR,
+							errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+							errmsg("%s is not allowed for disabled subscriptions",
+								   "ALTER SUBSCRIPTION ... REFRESH SEQUENCES"));
+
+				AlterSubscription_refresh_seq(sub);
+
+				break;
+			}
+
 		case ALTER_SUBSCRIPTION_SKIP:
 			{
 				parse_subscription_options(pstate, stmt->options, SUBOPT_LSN, &opts);
@@ -1824,9 +1973,9 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,
 			if (retain_dead_tuples)
 				check_pub_dead_tuple_retention(wrconn);
 
-			check_publications_origin(wrconn, sub->publications, false,
-									  retain_dead_tuples, origin, NULL, 0,
-									  sub->name);
+			check_publications_origin_tables(wrconn, sub->publications, false,
+											 retain_dead_tuples, origin, NULL, 0,
+											 sub->name);
 
 			if (update_failover || update_two_phase)
 				walrcv_alter_slot(wrconn, sub->slotname,
@@ -2008,7 +2157,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	 * the apply and tablesync workers and they can't restart because of
 	 * exclusive lock on the subscription.
 	 */
-	rstates = GetSubscriptionRelations(subid, true);
+	rstates = GetSubscriptionRelations(subid, true, false, true);
 	foreach(lc, rstates)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
@@ -2341,10 +2490,10 @@ AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId)
  *    - See comments atop worker.c for more details.
  */
 static void
-check_publications_origin(WalReceiverConn *wrconn, List *publications,
-						  bool copydata, bool retain_dead_tuples,
-						  char *origin, Oid *subrel_local_oids,
-						  int subrel_count, char *subname)
+check_publications_origin_tables(WalReceiverConn *wrconn, List *publications,
+								 bool copydata, bool retain_dead_tuples,
+								 char *origin, Oid *subrel_local_oids,
+								 int subrel_count, char *subname)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
@@ -2421,7 +2570,7 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 				 errmsg("could not receive list of replicated tables from the publisher: %s",
 						res->err)));
 
-	/* Process tables. */
+	/* Process publications. */
 	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 	{
@@ -2482,6 +2631,114 @@ check_publications_origin(WalReceiverConn *wrconn, List *publications,
 	walrcv_clear_result(res);
 }
 
+/*
+ * This function is similar to check_publications_origin_tables and serves
+ * same purpose for sequences.
+ */
+static void
+check_publications_origin_sequences(WalReceiverConn *wrconn, List *publications,
+									bool copydata, char *origin,
+									Oid *subrel_local_oids, int subrel_count,
+									char *subname)
+{
+	WalRcvExecResult *res;
+	StringInfoData cmd;
+	TupleTableSlot *slot;
+	Oid			tableRow[1] = {TEXTOID};
+	List	   *publist = NIL;
+
+	/*
+	 * Enable sequence synchronization checks only when origin is 'none' , to
+	 * ensure that sequence data from other origins is not inadvertently
+	 * copied.
+	 */
+	if (!copydata || pg_strcasecmp(origin, LOGICALREP_ORIGIN_NONE) != 0)
+		return;
+
+	initStringInfo(&cmd);
+	appendStringInfoString(&cmd,
+						   "SELECT DISTINCT P.pubname AS pubname\n"
+						   "FROM pg_publication P,\n"
+						   "     LATERAL pg_get_publication_sequences(P.pubname) GPS\n"
+						   "     JOIN pg_subscription_rel PS ON (GPS.relid = PS.srrelid),\n"
+						   "     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)\n"
+						   "WHERE C.oid = GPS.relid AND P.pubname IN (");
+
+	GetPublicationsStr(publications, &cmd, true);
+	appendStringInfoString(&cmd, ")\n");
+
+	/*
+	 * In case of ALTER SUBSCRIPTION ... REFRESH PUBLICATION,
+	 * subrel_local_oids contains the list of relations that are already
+	 * present on the subscriber. This check should be skipped as these will
+	 * not be re-synced.
+	 */
+	for (int i = 0; i < subrel_count; i++)
+	{
+		Oid			relid = subrel_local_oids[i];
+		char	   *schemaname = get_namespace_name(get_rel_namespace(relid));
+		char	   *seqname = get_rel_name(relid);
+
+		appendStringInfo(&cmd,
+						 "AND NOT (N.nspname = '%s' AND C.relname = '%s')\n",
+						 schemaname, seqname);
+	}
+
+	res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
+	pfree(cmd.data);
+
+	if (res->status != WALRCV_OK_TUPLES)
+		ereport(ERROR,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("could not receive list of replicated sequences from the publisher: %s",
+						res->err)));
+
+	/* Process publications. */
+	slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+	while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+	{
+		char	   *pubname;
+		bool		isnull;
+
+		pubname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+		Assert(!isnull);
+
+		ExecClearTuple(slot);
+		publist = list_append_unique(publist, makeString(pubname));
+	}
+
+	/*
+	 * Log a warning if the publisher has subscribed to the same sequence from
+	 * some other publisher. We cannot know the origin of sequences data
+	 * during the initial sync.
+	 */
+	if (publist)
+	{
+		StringInfo	pubnames = makeStringInfo();
+		StringInfo	err_msg = makeStringInfo();
+		StringInfo	err_hint = makeStringInfo();
+
+		/* Prepare the list of publication(s) for warning message. */
+		GetPublicationsStr(publist, pubnames, false);
+
+		appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
+						 subname);
+		appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher sequences did not come from other origins."));
+
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_internal("%s", err_msg->data),
+				errdetail_plural("The subscription subscribes to a publication (%s) that contains sequences that are written to by other subscriptions.",
+								 "The subscription subscribes to publications (%s) that contain sequences that are written to by other subscriptions.",
+								 list_length(publist), pubnames->data),
+				errhint_internal("%s", err_hint->data));
+	}
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	walrcv_clear_result(res);
+}
+
 /*
  * Determine whether the retain_dead_tuples can be enabled based on the
  * publisher's status.
@@ -2594,8 +2851,23 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
 }
 
 /*
- * Get the list of tables which belong to specified publications on the
- * publisher connection.
+ * Return true iff 'rv' is a member of the list.
+ */
+static bool
+list_member_rangevar(const List *list, RangeVar *rv)
+{
+	foreach_ptr(PublicationRelKind, relinfo, list)
+	{
+		if (equal(relinfo->rv, rv))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Get the list of tables and sequences which belong to specified publications
+ * on the publisher connection.
  *
  * Note that we don't support the case where the column list is different for
  * the same table in different publications to avoid sending unwanted column
@@ -2603,15 +2875,16 @@ CheckSubDeadTupleRetention(bool check_guc, bool sub_disabled,
  * list and row filter are specified for different publications.
  */
 static List *
-fetch_table_list(WalReceiverConn *wrconn, List *publications)
+fetch_relation_list(WalReceiverConn *wrconn, List *publications)
 {
 	WalRcvExecResult *res;
 	StringInfoData cmd;
 	TupleTableSlot *slot;
-	Oid			tableRow[3] = {TEXTOID, TEXTOID, InvalidOid};
-	List	   *tablelist = NIL;
+	Oid			tableRow[4] = {TEXTOID, TEXTOID, CHAROID, InvalidOid};
+	List	   *relationlist = NIL;
 	int			server_version = walrcv_server_version(wrconn);
 	bool		check_columnlist = (server_version >= 150000);
+	int			column_count = check_columnlist ? 4 : 3;
 	StringInfo	pub_names = makeStringInfo();
 
 	initStringInfo(&cmd);
@@ -2619,10 +2892,10 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 	/* Build the pub_names comma-separated string. */
 	GetPublicationsStr(publications, pub_names, true);
 
-	/* Get the list of tables from the publisher. */
+	/* Get the list of relations from the publisher */
 	if (server_version >= 160000)
 	{
-		tableRow[2] = INT2VECTOROID;
+		tableRow[3] = INT2VECTOROID;
 
 		/*
 		 * From version 16, we allowed passing multiple publications to the
@@ -2637,19 +2910,28 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		 * to worry if different publications have specified them in a
 		 * different order. See pub_collist_validate.
 		 */
-		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, gpt.attrs\n"
-						 "       FROM pg_class c\n"
+		appendStringInfo(&cmd, "SELECT DISTINCT n.nspname, c.relname, c.relkind, gpt.attrs\n"
+						 "   FROM pg_class c\n"
 						 "         JOIN pg_namespace n ON n.oid = c.relnamespace\n"
 						 "         JOIN ( SELECT (pg_get_publication_tables(VARIADIC array_agg(pubname::text))).*\n"
 						 "                FROM pg_publication\n"
 						 "                WHERE pubname IN ( %s )) AS gpt\n"
 						 "             ON gpt.relid = c.oid\n",
 						 pub_names->data);
+
+		/* From version 19, inclusion of sequences in the target is supported */
+		if (server_version >= 190000)
+			appendStringInfo(&cmd,
+							 "UNION ALL\n"
+							 "  SELECT DISTINCT s.schemaname, s.sequencename, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind, NULL::int2vector AS attrs\n"
+							 "  FROM pg_catalog.pg_publication_sequences s\n"
+							 "  WHERE s.pubname IN ( %s )",
+							 pub_names->data);
 	}
 	else
 	{
-		tableRow[2] = NAMEARRAYOID;
-		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename \n");
+		tableRow[3] = NAMEARRAYOID;
+		appendStringInfoString(&cmd, "SELECT DISTINCT t.schemaname, t.tablename, " CppAsString2(RELKIND_RELATION) "::\"char\" AS relkind \n");
 
 		/* Get column lists for each relation if the publisher supports it */
 		if (check_columnlist)
@@ -2662,7 +2944,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	destroyStringInfo(pub_names);
 
-	res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);
+	res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
 	pfree(cmd.data);
 
 	if (res->status != WALRCV_OK_TUPLES)
@@ -2678,22 +2960,28 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 		char	   *nspname;
 		char	   *relname;
 		bool		isnull;
-		RangeVar   *rv;
+		char		relkind;
+		PublicationRelKind *relinfo = palloc_object(PublicationRelKind);
 
 		nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
 		Assert(!isnull);
 		relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
 		Assert(!isnull);
+		relkind = DatumGetChar(slot_getattr(slot, 3, &isnull));
+		Assert(!isnull);
 
-		rv = makeRangeVar(nspname, relname, -1);
+		relinfo->rv = makeRangeVar(nspname, relname, -1);
+		relinfo->relkind = relkind;
 
-		if (check_columnlist && list_member(tablelist, rv))
+		if (relkind != RELKIND_SEQUENCE &&
+			check_columnlist &&
+			list_member_rangevar(relationlist, relinfo->rv))
 			ereport(ERROR,
 					errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					errmsg("cannot use different column lists for table \"%s.%s\" in different publications",
 						   nspname, relname));
 		else
-			tablelist = lappend(tablelist, rv);
+			relationlist = lappend(relationlist, relinfo);
 
 		ExecClearTuple(slot);
 	}
@@ -2701,7 +2989,7 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
 
 	walrcv_clear_result(res);
 
-	return tablelist;
+	return relationlist;
 }
 
 /*
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index b409d4ecbf5..def32774c90 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -1112,18 +1112,36 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd)
 
 
 /*
- * Check if we support writing into specific relkind.
+ * Check if we support writing into specific relkind of local relation and check
+ * if it aligns with the relkind of the relation on the publisher.
  *
  * The nspname and relname are only needed for error reporting.
  */
 void
-CheckSubscriptionRelkind(char relkind, const char *nspname,
-						 const char *relname)
+CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+						 const char *nspname, const char *relname)
 {
-	if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+	if (localrelkind != RELKIND_RELATION &&
+		localrelkind != RELKIND_PARTITIONED_TABLE &&
+		localrelkind != RELKIND_SEQUENCE)
 		ereport(ERROR,
 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 				 errmsg("cannot use relation \"%s.%s\" as logical replication target",
 						nspname, relname),
-				 errdetail_relkind_not_supported(relkind)));
+				 errdetail_relkind_not_supported(localrelkind)));
+
+	/*
+	 * Allow RELKIND_RELATION and RELKIND_PARTITIONED_TABLE to be treated
+	 * interchangeably, but ensure that sequences (RELKIND_SEQUENCE) match
+	 * exactly on both publisher and subscriber.
+	 */
+	if ((localrelkind == RELKIND_SEQUENCE && remoterelkind != RELKIND_SEQUENCE) ||
+		(localrelkind != RELKIND_SEQUENCE && remoterelkind == RELKIND_SEQUENCE))
+		ereport(ERROR,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+		/* translator: 3rd and 4th %s are "sequence" or "table" */
+				errmsg("relation \"%s.%s\" type mismatch: source \"%s\", target \"%s\"",
+					   nspname, relname,
+					   remoterelkind == RELKIND_SEQUENCE ? "sequence" : "table",
+					   localrelkind == RELKIND_SEQUENCE ? "sequence" : "table"));
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dc0c2886674..a4b29c822e8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10992,6 +10992,15 @@ AlterSubscriptionStmt:
 					n->options = $6;
 					$$ = (Node *) n;
 				}
+			| ALTER SUBSCRIPTION name REFRESH SEQUENCES
+				{
+					AlterSubscriptionStmt *n =
+						makeNode(AlterSubscriptionStmt);
+
+					n->kind = ALTER_SUBSCRIPTION_REFRESH_SEQUENCES;
+					n->subname = $3;
+					$$ = (Node *) n;
+				}
 			| ALTER SUBSCRIPTION name ADD_P PUBLICATION name_list opt_definition
 				{
 					AlterSubscriptionStmt *n =
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 2436a263dc2..ed62888764c 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -708,6 +708,9 @@ logicalrep_read_rel(StringInfo in)
 	/* Read the replica identity. */
 	rel->replident = pq_getmsgbyte(in);
 
+	/* relkind is not sent */
+	rel->relkind = 0;
+
 	/* Get attribute description */
 	logicalrep_read_attrs(in, rel);
 
diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c
index f59046ad620..745fd3bab64 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -196,6 +196,17 @@ logicalrep_relmap_update(LogicalRepRelation *remoterel)
 		entry->remoterel.atttyps[i] = remoterel->atttyps[i];
 	}
 	entry->remoterel.replident = remoterel->replident;
+
+	/*
+	 * XXX The walsender currently does not transmit the relkind of the remote
+	 * relation when replicating changes. Since we support replicating only
+	 * table changes at present, we default to initializing relkind as
+	 * RELKIND_RELATION. This is needed in CheckSubscriptionRelkind() to check
+	 * if the publisher and subscriber relation kinds are compatible.
+	 */
+	entry->remoterel.relkind =
+		(remoterel->relkind == 0) ? RELKIND_RELATION : remoterel->relkind;
+
 	entry->remoterel.attkeys = bms_copy(remoterel->attkeys);
 	MemoryContextSwitchTo(oldctx);
 }
@@ -425,6 +436,7 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode)
 
 		/* Check for supported relkind. */
 		CheckSubscriptionRelkind(entry->localrel->rd_rel->relkind,
+								 remoterel->relkind,
 								 remoterel->nspname, remoterel->relname);
 
 		/*
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index 1bb3ca01db0..e452a1e78d4 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -149,8 +149,9 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables and sequences that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true);
+		/* Fetch tables that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 2ba12517e93..40e1ed3c20e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -840,7 +840,7 @@ fetch_remote_table_info(char *nspname, char *relname, LogicalRepRelation *lrel,
 		/*
 		 * We don't support the case where the column list is different for
 		 * the same table when combining publications. See comments atop
-		 * fetch_table_list. So there should be only one row returned.
+		 * fetch_relation_list. So there should be only one row returned.
 		 * Although we already checked this when creating the subscription, we
 		 * still need to check here in case the column list was changed after
 		 * creating the subscription and before the sync worker is started.
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ec65a385f2d..5df5a4612b6 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3368,6 +3368,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 	 * at CREATE/ALTER SUBSCRIPTION would be insufficient.
 	 */
 	CheckSubscriptionRelkind(partrel->rd_rel->relkind,
+							 relmapentry->remoterel.relkind,
 							 get_namespace_name(RelationGetNamespace(partrel)),
 							 RelationGetRelationName(partrel));
 
@@ -3564,6 +3565,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,
 
 					/* Check that new partition also has supported relkind. */
 					CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
+											 relmapentry->remoterel.relkind,
 											 get_namespace_name(RelationGetNamespace(partrel_new)),
 											 RelationGetRelationName(partrel_new));
 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 378b61fbd18..942e1abdb58 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1137,9 +1137,9 @@ pgoutput_column_list_init(PGOutputData *data, List *publications,
 	 *
 	 * Note that we don't support the case where the column list is different
 	 * for the same table when combining publications. See comments atop
-	 * fetch_table_list. But one can later change the publication so we still
-	 * need to check all the given publication-table mappings and report an
-	 * error if any publications have a different column list.
+	 * fetch_relation_list. But one can later change the publication so we
+	 * still need to check all the given publication-table mappings and report
+	 * an error if any publications have a different column list.
 	 */
 	foreach(lc, publications)
 	{
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index ad37f9f6ed0..fa08059671b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -2319,11 +2319,11 @@ match_previous_words(int pattern_id,
 	/* ALTER SUBSCRIPTION <name> */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny))
 		COMPLETE_WITH("CONNECTION", "ENABLE", "DISABLE", "OWNER TO",
-					  "RENAME TO", "REFRESH PUBLICATION", "SET", "SKIP (",
-					  "ADD PUBLICATION", "DROP PUBLICATION");
-	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */
-	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION"))
-		COMPLETE_WITH("WITH (");
+					  "RENAME TO", "REFRESH PUBLICATION", "REFRESH SEQUENCES",
+					  "SET", "SKIP (", "ADD PUBLICATION", "DROP PUBLICATION");
+	/* ALTER SUBSCRIPTION <name> REFRESH */
+	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH"))
+		COMPLETE_WITH("PUBLICATION", "SEQUENCES");
 	/* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION WITH ( */
 	else if (Matches("ALTER", "SUBSCRIPTION", MatchAny, MatchAnyN, "REFRESH", "PUBLICATION", "WITH", "("))
 		COMPLETE_WITH("copy_data");
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 61b63c6bb7a..9f88498ecd3 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -90,7 +90,8 @@ extern char GetSubscriptionRelState(Oid subid, Oid relid, XLogRecPtr *sublsn);
 extern void RemoveSubscriptionRel(Oid subid, Oid relid);
 
 extern bool HasSubscriptionTables(Oid subid);
-extern List *GetSubscriptionRelations(Oid subid, bool not_ready);
+extern List *GetSubscriptionRelations(Oid subid, bool tables, bool sequences,
+									  bool not_ready);
 
 extern void UpdateDeadTupleRetentionStatus(Oid subid, bool active);
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 3248e78cd28..0ba86c2ad72 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -784,8 +784,8 @@ extern void ExecSimpleRelationDelete(ResultRelInfo *resultRelInfo,
 									 TupleTableSlot *searchslot);
 extern void CheckCmdReplicaIdentity(Relation rel, CmdType cmd);
 
-extern void CheckSubscriptionRelkind(char relkind, const char *nspname,
-									 const char *relname);
+extern void CheckSubscriptionRelkind(char localrelkind, char remoterelkind,
+									 const char *nspname, const char *relname);
 
 /*
  * prototypes from functions in nodeModifyTable.c
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4e445fe0cd7..ecbddd12e1b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4362,6 +4362,7 @@ typedef enum AlterSubscriptionType
 	ALTER_SUBSCRIPTION_ADD_PUBLICATION,
 	ALTER_SUBSCRIPTION_DROP_PUBLICATION,
 	ALTER_SUBSCRIPTION_REFRESH_PUBLICATION,
+	ALTER_SUBSCRIPTION_REFRESH_SEQUENCES,
 	ALTER_SUBSCRIPTION_ENABLED,
 	ALTER_SUBSCRIPTION_SKIP,
 } AlterSubscriptionType;
diff --git a/src/test/subscription/meson.build b/src/test/subscription/meson.build
index 20b4e523d93..85d10a89994 100644
--- a/src/test/subscription/meson.build
+++ b/src/test/subscription/meson.build
@@ -45,6 +45,7 @@ tests += {
       't/033_run_as_table_owner.pl',
       't/034_temporal.pl',
       't/035_conflicts.pl',
+      't/036_sequences.pl',
       't/100_bugs.pl',
     ],
   },
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
new file mode 100644
index 00000000000..557fc91c017
--- /dev/null
+++ b/src/test/subscription/t/036_sequences.pl
@@ -0,0 +1,55 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+# This tests that sequences are registered to be synced to the subscriber
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+
+# Avoid checkpoint during the test, otherwise, extra values will be fetched for
+# the sequences which will cause the test to fail randomly.
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Initialize subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init;
+$node_subscriber->start;
+
+# Setup structure on the publisher
+my $ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+);
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Setup the same structure on the subscriber
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Insert initial test data
+$node_publisher->safe_psql(
+	'postgres', qq(
+	-- generate a number of values using the sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Setup logical replication pub/sub
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_seq_pub FOR ALL SEQUENCES");
+$node_subscriber->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
+);
+
+# Confirm sequences can be listed in pg_subscription_rel
+my $result = $node_subscriber->safe_psql('postgres',
+	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+);
+is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 377a7946585..bdf76d0324f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2363,6 +2363,7 @@ PublicationObjSpec
 PublicationObjSpecType
 PublicationPartOpt
 PublicationRelInfo
+PublicationRelKind
 PublicationSchemaInfo
 PublicationTable
 PublishGencolsType
-- 
2.43.0

Attachment: v20251023-0002-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 8869334ee17d0ae117de72f6c9114d56ea9b3368 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 22 Oct 2025 19:23:57 +0530
Subject: [PATCH v20251023 2/3] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of the sequences that are in INIT state (using pg_get_sequence_data() on the publisher).
    b) Raises an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing synchronized sequences in batches of 100.

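For example (illustration only, not part of the patch), the state of each
sequence can be inspected on the subscriber via pg_subscription_rel, where
srsubstate 'i' corresponds to INIT and 'r' to READY:

    SELECT c.relname, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S';
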
Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      this does not add newly published sequences or remove sequences
      that are no longer published.

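For example (object names and the connection string below are placeholders),
the three paths above correspond to commands such as:

    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
    CREATE SUBSCRIPTION seq_sub CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION seq_pub;
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;
    ALTER SUBSCRIPTION seq_sub REFRESH SEQUENCES;
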
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  23 +-
 src/backend/commands/subscriptioncmds.c       |   4 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  69 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 768 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 110 ++-
 src/backend/replication/logical/tablesync.c   |  82 +-
 src/backend/replication/logical/worker.c      |  71 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  29 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/t/026_stats.pl          |  58 +-
 src/test/subscription/t/036_sequences.pl      | 192 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 25 files changed, 1351 insertions(+), 162 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..1f3ef004aa3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.sequence_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..067c6c68ee8 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * The page LSN will be used in logical replication of sequences to
+		 * record the LSN of the sequence page in the pg_subscription_rel
+		 * system catalog.  It reflects the LSN of the remote sequence at the
+		 * time it was synchronized.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index a0974d71de1..0b18f2feae4 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1082,7 +1082,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				sub_remove_rels = lappend(sub_remove_rels, remove_rel);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 				/*
 				 * For READY state, we would have already dropped the
@@ -2134,7 +2134,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..a38a89509d0 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,23 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * subscription id, relid and type.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * For apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables and sequences. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +272,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -329,6 +333,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -417,7 +422,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -506,8 +512,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -628,15 +642,18 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
 
 /*
  * Stop the logical replication worker for subid/relid, if any.
+ *
+ * Similar to logicalrep_worker_find, relid should be set to a valid OID only
+ * for table sync workers.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -703,7 +720,10 @@ logicalrep_worker_wakeup(Oid subid, Oid relid)
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid,
+									OidIsValid(relid)
+									? WORKERTYPE_TABLESYNC
+									: WORKERTYPE_APPLY, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -835,6 +855,25 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -883,7 +922,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1260,7 +1299,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
@@ -1596,7 +1636,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1636,6 +1676,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..4a1561e5062
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,768 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) It avoids overloading the launcher, which handles various other
+ *    subscription requests.
+ * b) It offers a more straightforward path for extending support for
+ *    incremental sequence synchronization.
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences with insufficient privileges, sequences whose
+ * parameters mismatch between the two sides, and sequences missing on the publisher.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+					   StringInfo missing_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s)",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+	}
+
+	if (missing_seqs->len)
+	{
+		if (insuffperm_seqs->len || mismatched_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, " For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, "For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s.", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+
+/*
+ * Copy existing data of sequence from the publisher.
+ *
+ * Fetch the sequence value from the publisher and set the subscriber sequence
+ * with the same value.
+ */
+static void
+copy_sequence(TupleTableSlot *slot, LogicalRepSequenceInfo *seqinfo,
+			  StringInfo mismatched_seqs, StringInfo insuffperm_seqs,
+			  int *succeeded_count, int *mismatched_count, int *skipped_count,
+			  int *insuffperm_count)
+{
+	int			col = 0;
+	bool		isnull;
+	char	   *nspname;
+	char	   *seqname;
+	int64		last_value;
+	bool		is_called;
+	XLogRecPtr	page_lsn;
+	Oid			seqtypid;
+	int64		seqstart;
+	int64		seqmin;
+	int64		seqmax;
+	int64		seqincrement;
+	bool		seqcycle;
+	HeapTuple	tup;
+	Relation	sequence_rel;
+	Form_pg_sequence seqform;
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	CHECK_FOR_INTERRUPTS();
+
+	/* Get sequence information from the fetched tuple */
+	nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	/* Get the local sequence object */
+	sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!sequence_rel || !HeapTupleIsValid(tup))
+	{
+		(*skipped_count)++;
+		elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+			 nspname, seqname);
+		return;
+	}
+
+	/* Skip if the entry is no longer valid */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		table_close(sequence_rel, RowExclusiveLock);
+		(*skipped_count)++;
+		ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered concurrently",
+							nspname, seqname));
+		return;
+	}
+
+	seqform = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Update the sequence only if the parameters are identical */
+	if (seqform->seqtypid == seqtypid &&
+		seqform->seqmin == seqmin && seqform->seqmax == seqmax &&
+		seqform->seqcycle == seqcycle &&
+		seqform->seqstart == seqstart &&
+		seqform->seqincrement == seqincrement)
+	{
+		if (!run_as_owner)
+			SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+		/* Check for sufficient permissions */
+		aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		if (aclresult != ACLCHECK_OK)
+		{
+			append_sequence_name(insuffperm_seqs, nspname, seqname,
+								 insuffperm_count);
+			ReleaseSysCache(tup);
+			table_close(sequence_rel, RowExclusiveLock);
+			return;
+		}
+
+		SetSequence(seqinfo->localrelid, last_value, is_called);
+		(*succeeded_count)++;
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+								MySubscription->name, nspname, seqname));
+
+		UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+								   SUBREL_STATE_READY, page_lsn, false);
+	}
+	else
+		append_sequence_name(mismatched_seqs, nspname, seqname,
+							 mismatched_count);
+
+	ReleaseSysCache(tup);
+	table_close(sequence_rel, NoLock);
+}
+
+/*
+ * Copy existing data of sequences from the publisher. Caller is responsible
+ * for locking the local relation.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n"
+						 "ORDER BY s.schname, s.seqname\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			bool		isnull;
+			bool		found;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			key.nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+			Assert(!isnull);
+
+			key.seqname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+			Assert(!isnull);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			copy_sequence(slot, seqinfo, mismatched_seqs,
+						  insuffperm_seqs, &batch_succeeded_count,
+						  &batch_mismatched_count, &batch_skipped_count,
+						  &batch_insuffperm_count);
+
+			/* Remove successfully processed sequence */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the batch size rather than by the
+		 * number of fetched rows, because some sequences may be missing on
+		 * the publisher and fewer rows than the batch size may be returned.
+		 * The hash_search with HASH_REMOVE takes care of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname, NULL);
+	}
+
+	/* Report any permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs, missing_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	HASH_SEQ_STATUS hash_seq;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHARNE,
+				CharGetDatum(SUBREL_STATE_READY));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+	{
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+
+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);
+	sequences_to_copy = NULL;
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Unlike table synchronization, there is no replication slot name to allocate
+ * here.  Note that we don't handle FATAL errors, which are probably caused by
+ * a system resource error and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker(WORKERTYPE_SEQUENCESYNC);
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e452a1e78d4..6d5eaf1ccd9 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)
 {
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,14 +65,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -85,7 +100,48 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -107,6 +163,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -116,17 +178,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static since
+	 * the same values can be reused until the system catalog is invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -138,6 +207,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,8 +219,8 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
@@ -159,7 +229,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -184,5 +259,8 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..7d2745eb136 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -335,7 +335,7 @@ ProcessSyncingTablesForSync(XLogRecPtr current_lsn)
 		 */
 		replorigin_drop_by_name(originname, true, false);
 
-		FinishSyncWorker();
+		FinishSyncWorker(WORKERTYPE_TABLESYNC);
 	}
 	else
 		SpinLockRelease(&MyLogicalRepWorker->relmutex);
@@ -379,7 +379,7 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -413,6 +413,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +434,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -477,8 +480,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
-
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 			if (syncworker)
 			{
 				/* Found one, update our copy of its state */
@@ -549,43 +552,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1273,7 +1252,7 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 		case SUBREL_STATE_SYNCDONE:
 		case SUBREL_STATE_READY:
 		case SUBREL_STATE_UNKNOWN:
-			FinishSyncWorker(); /* doesn't return */
+			FinishSyncWorker(WORKERTYPE_TABLESYNC); /* doesn't return */
 	}
 
 	/* Calculate the name of the tablesync slot. */
@@ -1548,7 +1527,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1593,7 +1573,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1601,7 +1581,7 @@ TablesyncWorkerMain(Datum main_arg)
 
 	run_tablesync_worker();
 
-	FinishSyncWorker();
+	FinishSyncWorker(WORKERTYPE_TABLESYNC);
 }
 
 /*
@@ -1616,10 +1596,10 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1631,7 +1611,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1649,7 +1629,7 @@ HasSubscriptionTablesCached(void)
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5df5a4612b6..cf1e7f44935 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2464,7 +2484,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -3285,7 +3308,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
@@ -4135,7 +4158,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5578,7 +5604,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5698,8 +5725,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5810,6 +5837,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5829,14 +5860,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5907,6 +5940,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5919,9 +5956,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..002d630d4ae 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->sequence_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(sequence_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..4da7298502e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sequence_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* sequence_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->sequence_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index eecb43ec6f0..1890f4f4d97 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,sequence_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..b42b05e6342 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..2db16bd7f84 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter sequence_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ae352f6e691..a7c6588999f 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -255,6 +258,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,12 +267,16 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -279,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
-pg_noreturn extern void FinishSyncWorker(void);
+pg_noreturn extern void FinishSyncWorker(LogicalRepWorkerType wtype);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -348,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..c7bcc922ae8 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.sequence_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sequence_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..3c0b1db0510 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and sync_error_count > 0 and sequence_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that sequencesync doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,22 +149,24 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
@@ -165,13 +180,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
@@ -198,22 +214,24 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
 # Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
 	sync_error_count > 0,
+	sequence_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
@@ -226,13 +244,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
@@ -240,13 +259,14 @@ is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
 	sync_error_count = 0,
+	sequence_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
+	qq(t|t|t|t|t|t),
 	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..3b583057eb8 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,15 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4;
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +55,181 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize all currently
+# subscribed sequences, but should not pick up newly published sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for differing sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for the missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bdf76d0324f..c8599aeb6b7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1629,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20251023-0003-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From b292e2d03e09c74e7b11009276e838bb333cbb49 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 23 Oct 2025 10:45:44 +0530
Subject: [PATCH v20251023 3/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 300 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last value assigned to the sequence by <function>nextval</function>
+        or <function>setval</function>, <literal>is_called</literal> indicates
+        whether the sequence has been used, and <literal>page_lsn</literal> is
+        the LSN of the most recent WAL record that modified this sequence.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
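As a minimal illustration of the call documented above (the sequence name s1 is only an example), the raw state of a sequence can be fetched with:

    -- Illustrative only: inspect the raw state of a sequence named "s1".
    SELECT last_value, is_called, page_lsn
    FROM pg_get_sequence_data('s1'::regclass);

Per the updated pg_proc description, this is the same information that sequence synchronization fetches.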
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
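As a rough sketch (the subscription name sub1 is only an example), the transient worker can be observed via pg_stat_subscription while synchronization is in progress; per the monitoring changes later in this patch, its worker_type is reported as "sequence synchronization":

    -- Illustrative only: look for the short-lived sequence synchronization worker.
    SELECT subname, worker_type, pid
    FROM pg_stat_subscription
    WHERE subname = 'sub1';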
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
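A rough sketch of that resolution, reusing the sequence name and parameters from the TAP test above (both are illustrative): compare the definitions through the pg_sequences view on each node, then adjust the subscriber.

    -- Run on both publisher and subscriber and compare the output.
    SELECT sequencename, start_value, increment_by, min_value, max_value, cycle
    FROM pg_sequences
    WHERE schemaname = 'public' AND sequencename = 'regress_s5';

    -- On the subscriber, align the differing parameter(s) with the publisher.
    ALTER SEQUENCE public.regress_s5 START WITH 1;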
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Because incremental sequence changes are not replicated, sequence values on
+    the subscriber can become stale as the sequences advance on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
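A minimal sketch of that check, echoing the fuller session in the Examples below (sequence s1 and subscription sub1 are the example names used there):

    -- Run on both publisher and subscriber and compare the output.
    SELECT last_value, is_called FROM s1;

    -- If the subscriber is behind, re-synchronize all subscribed sequences.
    ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;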
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+         11 |       0 | t
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+        110 |       0 | t
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |       0 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |       0 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
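For the last of those options, a common recipe (table and sequence names here are hypothetical) is to derive the value from the column the sequence feeds:

    -- Bump the sequence past the highest value present in the table.
    SELECT setval('mytable_id_seq',
                  (SELECT COALESCE(max(id), 1) FROM mytable));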
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..0b2402b6ea6 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2192,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
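As a quick, illustrative way to read the new counter alongside the existing ones:

    -- Per-subscription error counters, including the new sequence sync counter.
    SELECT subname, apply_error_count, sequence_sync_error_count, sync_error_count
    FROM pg_stat_subscription_stats;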
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index d55e52dcc14..313c78a6376 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for
+          recommendations on how to handle errors caused by sequence definition
+          differences between the publisher and the subscriber, which might
+          occur when <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0
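
To put the worker budgeting from the documentation hunks above in concrete
terms (rough numbers of my own, not taken from the patch): with two
subscriptions and max_sync_workers_per_subscription = 2, leave headroom for
the two leader apply workers plus the table synchronization and sequence
synchronization workers they may spawn, for example:

ALTER SYSTEM SET max_sync_workers_per_subscription = 2;
ALTER SYSTEM SET max_logical_replication_workers = 8;  -- takes effect after a restart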

#426Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#425)
Re: Logical Replication of sequences

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

1.
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
ss.subid,
s.subname,
ss.apply_error_count,
+ ss.sequence_sync_error_count,
ss.sync_error_count,

The new column name is noticeably longer than the other columns. Can we
name it ss.seq_sync_error_count? We may also want to consider renaming
the existing column sync_error_count to tbl_sync_error_count. Can we
extract this into a separate stats patch?
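
For example, with both renames in place, a monitoring query would look
something like this (just an illustration, assuming the proposed names):

SELECT subname, apply_error_count, seq_sync_error_count, tbl_sync_error_count
FROM pg_stat_subscription_stats;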

2.
Datum
pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)

  values[0] = Int64GetDatum(seq->last_value);
  values[1] = BoolGetDatum(seq->is_called);
+
+ /*
+ * The page LSN will be used in logical replication of sequences to
+ * record the LSN of the sequence page in the pg_subscription_rel
+ * system catalog.  It reflects the LSN of the remote sequence at the
+ * time it was synchronized.
+ */
  values[2] = LSNGetDatum(PageGetLSN(page));

This comment appears out of place. We should mention it somewhere in the
sequencesync worker and add a reference to that place/function here.

3.
LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+    bool only_running)
…
…
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)

Let's extract changes for these APIs and their callers in a separate
patch that can be committed prior to the main patch.

4.
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+ LogicalRepWorker *worker;
+
+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+ worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+ WORKERTYPE_APPLY, true);

Shouldn't WORKERTYPE_SEQUENCESYNC be used? If not, then better to add
a comment on why a different type of worker is used for resetting the
seqsync time.

5.
+ * XXX: An alternative design was considered where the launcher process would
…
...
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.

These three points alone are sufficient to justify not going with this
alternative approach. Point (e) is the most important and should be
mentioned first.

6.
+ /* Get the local sequence object */
+ sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+ if (!sequence_rel || !HeapTupleIsValid(tup))
+ {
+ (*skipped_count)++;
+ elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has
been dropped concurrently",
+ nspname, seqname);
+ return;

We should close the relation when the tuple is not valid and can't proceed.

7.
+ /* Skip if the entry is no longer valid */
+ if (!seqinfo->entry_valid)
+ {
+ ReleaseSysCache(tup);
+ table_close(sequence_rel, RowExclusiveLock);
+ (*skipped_count)++;
+ ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\"
because it has been altered concurrently",
+ nspname, seqname));
+ return;

Isn't it better to release the cache entry and close the relation just
before returning? In the future, if we need to use something from the
tuple/relation, it would be easier.

8.
+LogicalRepSyncSequences(void)
{
...
+ ScanKeyInit(&skey[1],
+ Anum_pg_subscription_rel_srsubstate,
+ BTEqualStrategyNumber, F_CHARNE,
+ CharGetDatum(SUBREL_STATE_READY));

As we are using only two states (INIT and READY) for sequences, isn't
it better to use INIT state here? That should avoid sync-in-progress
tables.

9. I find the copy_sequences->copy_sequence code can be rearranged to
make it easier to follow. The point I don't like is that the boundary
between the two makes the code hard to follow and requires many
parameters to be passed to copy_sequence. One idea to improve this is to
move all failure checks out of copy_sequence, either directly into the
caller or into a separate function. All the values for each sequence can
be fetched in the caller, and copy_sequence can be used to call
SetSequence() and UpdateSubscriptionRelState(). If you have any better
ideas to rearrange this part of the patch, feel free to try those out
and share the results.

--
With Regards,
Amit Kapila.

#427Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: vignesh C (#425)
RE: Logical Replication of sequences

On Thursday, October 23, 2025 2:15 PM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.
I have also addressed the other comments: a) Shveta's comments at [1]
b) Peter's comments at [2] & [3] c) Shveta's 2nd patch comments at [4] and d)
Chao's comment#12 from [5] which was pending.

Thanks for updating the patch. I have a few comments for 0002.

1.

+		hash_seq_init(&hash_seq, sequences_to_copy);
+		while ((seq_entry = hash_seq_search(&hash_seq)) != NULL)
+		{
+			pfree(seq_entry->seqname);
+			pfree(seq_entry->nspname);
+		}
+	}
+
+	hash_destroy(sequences_to_copy);

I personally feel these memory free calls are unnecessary since the sync worker
will stop soon.

2.
-FinishSyncWorker(void)
+FinishSyncWorker(LogicalRepWorkerType wtype)

Can we directly access MyLogicalRepWorker->type instead of adding
a function parameter?

3.

"ORDER BY s.schname, s.seqname\n",

Just to confirm, is this "ORDER BY" necessary for correctness?

4.

elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
nspname, seqname);

Shall we use ereport here?

Best Regards,
Hou zj

#428shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#427)
Re: Logical Replication of sequences

Please find a few comments for 0002:

1)
sequencesync.c compiles without these inclusions:

+#include "replication/logicallauncher.h"
+#include "replication/worker_internal.h"
+#include "utils/rls.h"

2)
SEQ_LOG_CNT_INVALID: it is not used anywhere.

3)
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should cause sync of new sequences

The comment seems incorrect; the latter one is correct:
+# Check - newly published sequence is not synced

4)
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the
subscriber.

Since we check for both non-matching and missing sequences in the
concerned test, we should mention the missing sequence in the comment as
well.

5)
For the race condition where the worker is about to access the sequence
locally and it is altered concurrently, the worker now correctly reports
this. But it reports it as a success scenario. Once the scenario is
reported as 'seq-worker finished', we do not expect the worker to start
running again without the user doing a REFRESH. But in this case it runs
again, see the logs. It also restarts immediately once, for the same
reason: start_time is reset in the success scenario.
-------
17:35:05.618 IST [132551] LOG: logical replication apply worker for
subscription "sub1" has started
17:35:05.637 IST [132553] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
17:35:05.663 IST [132553] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
17:36:11.987 IST [132553] LOG: skip synchronization of sequence
"public.myseq249" because it has been altered concurrently
17:36:19.614 IST [132553] LOG: logical replication sequence
synchronization for subscription "sub1" - batch #1 = 1 attempted, 0
succeeded, 1 skipped, 0 mismatched, 0 insufficient permission, 0
missing from publisher
17:36:20.335 IST [132553] LOG: logical replication sequence
synchronization worker for subscription "sub1" has finished
17:36:20.435 IST [132586] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
17:36:20.545 IST [132586] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
-------

The behaviour looks slightly odd. Is there anything we can do about
this? Shall the skipped case be reported as ERROR due to the fact that
we leave it in state 'i' in pg_subscription_rel?
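
(For reference, the sequences left behind like this can be spotted with an
ad-hoc query against pg_subscription_rel that is not part of the patch,
e.g. for the 'sub1' subscription used in the test:

SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
JOIN pg_class c ON c.oid = sr.srrelid
WHERE s.subname = 'sub1' AND c.relkind = 'S' AND sr.srsubstate <> 'r';

Anything still in 'i' here is what the above scenario leaves behind.)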

thanks
Shveta

#429vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#426)
4 attachment(s)
Re: Logical Replication of sequences

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

1.
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
ss.subid,
s.subname,
ss.apply_error_count,
+ ss.sequence_sync_error_count,
ss.sync_error_count,

The new column name is noticeably longer than the other columns. Can we
name it ss.seq_sync_error_count? We may also want to consider renaming
the existing column sync_error_count to tbl_sync_error_count. Can we
extract this into a separate stats patch?

Modified and extracted a separate patch for tbl_sync_error_count

2.
Datum
pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,13 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)

values[0] = Int64GetDatum(seq->last_value);
values[1] = BoolGetDatum(seq->is_called);
+
+ /*
+ * The page LSN will be used in logical replication of sequences to
+ * record the LSN of the sequence page in the pg_subscription_rel
+ * system catalog.  It reflects the LSN of the remote sequence at the
+ * time it was synchronized.
+ */
values[2] = LSNGetDatum(PageGetLSN(page));

This comment appears out of place. We should mention it somewhere in the
sequencesync worker and add a reference to that place/function here.

Moved this comment to the copy_sequence function just above
UpdateSubscriptionRelState.

3.
LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+    bool only_running)
…
…
void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)

Let's extract changes for these APIs and their callers in a separate
patch that can be committed prior to the main patch.

Prepared a separate patch

4.
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+ LogicalRepWorker *worker;
+
+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+ worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+ WORKERTYPE_APPLY, true);

Shouldn't WORKERTYPE_SEQUENCESYNC be used? If not, then better to add
a comment on why a different type of worker is used for resetting the
seqsync time.

It should be the apply worker here. The last_seqsync_start_time is
tracked in the apply worker rather than in the sequence sync worker, as
the sequence sync worker has finished and is about to exit.

5.
+ * XXX: An alternative design was considered where the launcher process would
…
...
+ * c) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * d) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ * e) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.

These three points alone are sufficient to justify not going with this
alternative approach. Point (e) is the most important and should be
mentioned first.

Modified

6.
+ /* Get the local sequence object */
+ sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+ if (!sequence_rel || !HeapTupleIsValid(tup))
+ {
+ (*skipped_count)++;
+ elog(LOG, "skip synchronization of sequence \"%s.%s\" because it has
been dropped concurrently",
+ nspname, seqname);
+ return;

We should close the relation when the tuple is not valid and can't proceed.

Modified slightly to check in validate_sequence and close the relation
in copy_sequence.

7.
+ /* Skip if the entry is no longer valid */
+ if (!seqinfo->entry_valid)
+ {
+ ReleaseSysCache(tup);
+ table_close(sequence_rel, RowExclusiveLock);
+ (*skipped_count)++;
+ ereport(LOG, errmsg("skip synchronization of sequence \"%s.%s\"
because it has been altered concurrently",
+ nspname, seqname));
+ return;

Isn't it better to release the cache entry and close the relation just
before returning? In the future, if we need to use something from the
tuple/relation, it would be easier.

Because of another comment, the code is now restructured. As per the
current logic, the tuple is released in validate_sequence and the
error-based logging is done in copy_sequence. With the new code
structure, releasing the tuple in validate_sequence is better.

8.
+LogicalRepSyncSequences(void)
{
...
+ ScanKeyInit(&skey[1],
+ Anum_pg_subscription_rel_srsubstate,
+ BTEqualStrategyNumber, F_CHARNE,
+ CharGetDatum(SUBREL_STATE_READY));

As we are using only two states (INIT and READY) for sequences, isn't
it better to use INIT state here? That should avoid sync-in-progress
tables.

Modified

9. I find the copy_sequences->copy_sequence code can be rearranged to
make it easier to follow. The point I don't like is that the boundary
between the two makes the code hard to follow and requires many
parameters to be passed to copy_sequence. One idea to improve this is to
move all failure checks out of copy_sequence, either directly into the
caller or into a separate function. All the values for each sequence can
be fetched in the caller, and copy_sequence can be used to call
SetSequence() and UpdateSubscriptionRelState(). If you have any better
ideas to rearrange this part of the patch, feel free to try those out
and share the results.

Modified

The attached v20251024 version of the patch set has the changes for the same.
The comments from [1] have also been addressed in this version.

[1]: /messages/by-id/TY4PR01MB169078C625FB8980E6F42F4F994F1A@TY4PR01MB16907.jpnprd01.prod.outlook.com

Regards,
Vignesh

Attachments:

v20251024-0001-Rename-sync_error_count-to-tbl_sync_error_.patch
From 57b2a0948ee298206c71ece7204136f715efe6f1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 24 Oct 2025 18:54:37 +0530
Subject: [PATCH v20251024 1/4] Rename sync_error_count to tbl_sync_error_count

The variable sync_error_count has been renamed to tbl_sync_error_count to
make its purpose explicit and to avoid confusion with upcoming changes that
will introduce a separate seq_sync_error_count for sequence synchronization
errors. This renaming clarifies that the counter specifically tracks
table synchronization errors and ensures a better distinction once
sequence synchronization is supported in future patches.
---
 doc/src/sgml/monitoring.sgml                  |  2 +-
 src/backend/catalog/system_views.sql          |  2 +-
 .../utils/activity/pgstat_subscription.c      |  4 +--
 src/backend/utils/adt/pgstatfuncs.c           |  6 ++--
 src/include/catalog/pg_proc.dat               |  2 +-
 src/include/pgstat.h                          |  4 +--
 src/test/regress/expected/rules.out           |  4 +--
 src/test/subscription/t/026_stats.pl          | 34 +++++++++++--------
 8 files changed, 31 insertions(+), 27 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..1afdcd703c5 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2194,7 +2194,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
 
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
-       <structfield>sync_error_count</structfield> <type>bigint</type>
+       <structfield>tbl_sync_error_count</structfield> <type>bigint</type>
       </para>
       <para>
        Number of times an error occurred during the initial table
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..77acf8f2798 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,7 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
-        ss.sync_error_count,
+        ss.tbl_sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
         ss.confl_update_exists,
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..89dc04a72a2 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -36,7 +36,7 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 	if (is_apply_error)
 		pending->apply_error_count++;
 	else
-		pending->sync_error_count++;
+		pending->tbl_sync_error_count++;
 }
 
 /*
@@ -115,7 +115,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
-	SUB_ACC(sync_error_count);
+	SUB_ACC(tbl_sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
 #undef SUB_ACC
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..f07efcc4530 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2213,7 +2213,7 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "tbl_sync_error_count",
 					   INT8OID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
 					   INT8OID, -1, 0);
@@ -2248,8 +2248,8 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
-	/* sync_error_count */
-	values[i++] = Int64GetDatum(subentry->sync_error_count);
+	/* tbl_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->tbl_sync_error_count);
 
 	/* conflict count */
 	for (int nconflict = 0; nconflict < CONFLICT_NUM_TYPES; nconflict++)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index eecb43ec6f0..754133f227e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5706,7 +5706,7 @@
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
   proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
   proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proargnames => '{subid,subid,apply_error_count,tbl_sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..90fb622357e 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -108,7 +108,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
-	PgStat_Counter sync_error_count;
+	PgStat_Counter tbl_sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
 
@@ -416,7 +416,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
-	PgStat_Counter sync_error_count;
+	PgStat_Counter tbl_sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
 } PgStat_StatSubEntry;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..166b9f8c89f 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,7 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
-    ss.sync_error_count,
+    ss.tbl_sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
     ss.confl_update_exists,
@@ -2202,7 +2202,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, tbl_sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..e8d3ff3fe65 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -63,7 +63,7 @@ sub create_sub_pub_w_errors
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
+	SELECT tbl_sync_error_count > 0
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub_name'
 	])
@@ -140,11 +140,12 @@ my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
 	$table1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
-	sync_error_count > 0,
+	tbl_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
@@ -152,7 +153,7 @@ is( $node_subscriber->safe_psql(
 	WHERE subname = '$sub1_name')
 	),
 	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(Check that apply errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,11 +161,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, tablesync errors, and conflicts are 0 and stats_reset timestamp
+# is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
-	sync_error_count = 0,
+	tbl_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
@@ -172,7 +174,7 @@ is( $node_subscriber->safe_psql(
 	WHERE subname = '$sub1_name')
 	),
 	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(Confirm that apply errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -202,11 +204,12 @@ my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
 	$table2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
-	sync_error_count > 0,
+	tbl_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
 	stats_reset IS NULL
@@ -214,18 +217,19 @@ is( $node_subscriber->safe_psql(
 	WHERE subname = '$sub2_name')
 	),
 	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(Confirm that apply errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, tablesync errors, and conflicts are 0 and stats_reset timestamp
+# is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
-	sync_error_count = 0,
+	tbl_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
@@ -233,13 +237,13 @@ is( $node_subscriber->safe_psql(
 	WHERE subname = '$sub1_name')
 	),
 	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(Confirm that apply errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
-	sync_error_count = 0,
+	tbl_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
 	stats_reset IS NOT NULL
@@ -247,7 +251,7 @@ is( $node_subscriber->safe_psql(
 	WHERE subname = '$sub2_name')
 	),
 	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(Confirm that apply errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0

v20251024-0002-Add-worker-type-argument-to-logicalrep_wor.patch
From 85e350f069379c4064d727efed26ea6e713b197d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 24 Oct 2025 15:03:50 +0530
Subject: [PATCH v20251024 2/4] Add worker type argument to
 logicalrep_worker_stop and logicalrep_worker_find

Extend the logicalrep_worker_stop and logicalrep_worker_find functions to
accept a worker type argument. This change makes it possible to distinguish
between different logical replication worker types, such as table sync and
sequence sync workers. The enhancement does not alter existing behavior but
prepares the code for future patches that will introduce sequence
synchronization workers.
---
 src/backend/commands/subscriptioncmds.c     |  4 ++--
 src/backend/replication/logical/launcher.c  | 26 ++++++++++++++-------
 src/backend/replication/logical/tablesync.c |  7 +++---
 src/backend/replication/logical/worker.c    |  2 +-
 src/include/replication/worker_internal.h   |  4 +++-
 5 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index a0974d71de1..0b18f2feae4 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1082,7 +1082,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				sub_remove_rels = lappend(sub_remove_rels, remove_rel);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				logicalrep_worker_stop(sub->oid, relid, WORKERTYPE_TABLESYNC);
 
 				/*
 				 * For READY state, we would have already dropped the
@@ -2134,7 +2134,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->subid, w->relid, w->type);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..d42e7815e8c 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -246,19 +246,23 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 
 /*
  * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * subscription id, relid and type.
  *
  * We are only interested in the leader apply worker or table sync worker.
+ * For apply workers, the relid should be set to InvalidOid, as they manage
+ * changes across all tables and sequences. For table sync workers, the relid
+ * should be set to the OID of the relation being synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +272,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -628,15 +632,18 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
 
 /*
  * Stop the logical replication worker for subid/relid, if any.
+ *
+ * Similar to logicalrep_worker_find, relid should be set to a valid OID only
+ * for table sync workers.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
 {
 	LogicalRepWorker *worker;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(subid, relid, wtype, false);
 
 	if (worker)
 	{
@@ -700,10 +707,12 @@ void
 logicalrep_worker_wakeup(Oid subid, Oid relid)
 {
 	LogicalRepWorker *worker;
+	LogicalRepWorkerType wtype =
+		OidIsValid(relid) ? WORKERTYPE_TABLESYNC : WORKERTYPE_APPLY;
 
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(subid, relid, wtype, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -1260,7 +1269,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(sub->oid, InvalidOid, WORKERTYPE_APPLY,
+									   false);
 
 			if (w != NULL)
 			{
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..8f606c8a59c 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -161,7 +161,7 @@ wait_for_table_state_change(Oid relid, char expected_state)
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
-										false);
+										WORKERTYPE_TABLESYNC, false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
 			break;
@@ -208,7 +208,7 @@ wait_for_worker_state_change(char expected_state)
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -477,7 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
 
 			if (syncworker)
 			{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5df5a4612b6..7aaedfb1efa 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -3285,7 +3285,7 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+										InvalidOid, WORKERTYPE_APPLY, false);
 		if (!leader)
 		{
 			ereport(ERROR,
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ae352f6e691..e97daa348d2 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -255,6 +255,7 @@ extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
 extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+												LogicalRepWorkerType wtype,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,7 +264,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(Oid subid, Oid relid,
+								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
 extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
-- 
2.43.0

v20251024-0003-New-worker-for-sequence-synchronization-du.patch
From 00762412022e61146e38f647e004fa2ebe18ec4a Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 24 Oct 2025 19:30:55 +0530
Subject: [PATCH v20251024 3/4] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences (via pg_sequence_state()) that are in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case
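
    For illustration, a typical flow on the subscriber would then be (the
    connection string below is only a placeholder):
        CREATE SUBSCRIPTION sub1
            CONNECTION 'dbname=postgres host=publisher' PUBLICATION pub1;
        ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION; -- picks up newly published sequences
        ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;   -- re-synchronizes all sequences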

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/catalog/system_views.sql          |   1 +
 src/backend/commands/sequence.c               |  21 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  53 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 799 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 112 ++-
 src/backend/replication/logical/tablesync.c   |  68 +-
 src/backend/replication/logical/worker.c      |  69 +-
 .../utils/activity/pgstat_subscription.c      |  27 +-
 src/backend/utils/adt/pgstatfuncs.c           |  27 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   8 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/pgstat.h                          |   6 +-
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  23 +-
 src/test/regress/expected/rules.out           |   3 +-
 src/test/subscription/t/026_stats.pl          |  80 +-
 src/test/subscription/t/036_sequences.pl      | 192 ++++-
 src/tools/pgindent/typedefs.list              |   3 +
 24 files changed, 1369 insertions(+), 156 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 77acf8f2798..e0fe58079bc 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.tbl_sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..ff29a3dc85b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,11 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * See the comment in copy_sequence() above
+		 * UpdateSubscriptionRelState() for details on recording the LSN.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index d42e7815e8c..14ca7715ea7 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,10 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given
  * subscription id, relid and type.
  *
- * We are only interested in the leader apply worker or table sync worker.
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables and sequences. For table sync workers, the relid
- * should be set to the OID of the relation being synchronized.
+ * For apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables and sequences. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
@@ -333,6 +333,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -421,7 +422,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -510,8 +512,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -844,6 +854,30 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/*
+	 * Set the last_seqsync_start_time for the sequence worker in the apply
+	 * worker instead of the sequence sync worker, as the sequence sync worker
+	 * has finished and is about to exit.
+	 */
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -892,7 +926,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1606,7 +1640,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1646,6 +1680,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..5610ebe252d
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,799 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ * b) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * c) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is a sequencesync worker already running for this subscription? */
+	sequencesync_worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+												 InvalidOid,
+												 WORKERTYPE_SEQUENCESYNC,
+												 true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber:
+ * sequences for which the subscription user lacks sufficient privileges,
+ * sequences whose parameters do not match on the two sides, and sequences
+ * that are missing on the publisher.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+					   StringInfo missing_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s)",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create the local sequences so that their parameters match the publisher's.");
+		}
+	}
+
+	if (missing_seqs->len)
+	{
+		if (insuffperm_seqs->len || mismatched_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, " For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, "For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s.", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * get_remote_sequence_info
+ *
+ * Extract remote sequence information from a tuple slot received from the
+ * publisher.
+ */
+static void
+get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
+						 int64 *last_value, bool *is_called,
+						 XLogRecPtr *page_lsn, Oid *remote_typid,
+						 int64 *remote_start, int64 *remote_increment,
+						 int64 *remote_min, int64 *remote_max,
+						 bool *remote_cycle)
+{
+	bool		isnull;
+	int			col = 0;
+
+	key->nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	key->seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+}
+
+/*
+ * Compare sequence parameters from publisher with local sequence.
+ */
+static CopySeqResult
+validate_sequence(Relation sequence_rel, LogicalRepSequenceInfo *seqinfo,
+				  Oid remote_typid, int64 remote_start,
+				  int64 remote_increment, int64 remote_min,
+				  int64 remote_max, bool remote_cycle)
+{
+	Form_pg_sequence local_seq;
+	HeapTuple	tup;
+	CopySeqResult	result = COPYSEQ_SUCCESS;
+
+	/* Sequence was concurrently dropped */
+	if (!sequence_rel)
+		return COPYSEQ_SKIPPED;
+
+	/* Sequence was concurrently dropped */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!HeapTupleIsValid(tup))
+		return COPYSEQ_SKIPPED;
+
+	/* Sequence was concurrently invalidated */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		return COPYSEQ_SKIPPED;
+	}
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as synchronized.
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
+			  bool is_called, XLogRecPtr page_lsn)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	/*
+	 * Make sure that the sequence is copied as the sequence owner, unless the
+	 * user has opted out of that behaviour.
+	 */
+	if (!MySubscription->runasowner)
+		SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqinfo->localrelid, last_value, is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY. The LSN represents the WAL position of the remote
+	 * sequence at the time it was synchronized.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+							   SUBREL_STATE_READY, page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			int64		last_value;
+			bool		is_called;
+			XLogRecPtr	page_lsn;
+			Oid			remote_typid;
+			int64		remote_start;
+			int64		remote_increment;
+			int64		remote_min;
+			int64		remote_max;
+			bool		remote_cycle;
+			bool		found;
+			CopySeqResult result;
+			Relation	sequence_rel;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			get_remote_sequence_info(slot, &key, &last_value, &is_called,
+									 &page_lsn, &remote_typid, &remote_start,
+									 &remote_increment, &remote_min,
+									 &remote_max, &remote_cycle);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			/* Try to open sequence */
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			result = validate_sequence(sequence_rel, seqinfo, remote_typid,
+									   remote_start, remote_increment,
+									   remote_min, remote_max, remote_cycle);
+			if (result == COPYSEQ_SUCCESS)
+				result = copy_sequence(seqinfo, last_value, is_called, page_lsn);
+
+			switch (result)
+			{
+				case COPYSEQ_MISMATCH:
+					append_sequence_name(mismatched_seqs, key.nspname,
+										 key.seqname, &batch_mismatched_count);
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+					append_sequence_name(insuffperm_seqs, key.nspname,
+										 key.seqname, &batch_insuffperm_count);
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skipping synchronization of sequence \"%s.%s\" because it has been altered or dropped concurrently",
+								   key.nspname, key.seqname));
+					batch_skipped_count++;
+					break;
+				default:
+					ereport(DEBUG1,
+							errmsg_internal("logical replication sequence synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name, key.nspname,
+											key.seqname));
+					batch_succeeded_count++;
+					break;
+			}
+
+			/* Remove processed sequence from the hash table. */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by batch_size rather than by the number of
+		 * fetched rows, because some sequences may be missing on the publisher
+		 * and the result set may contain fewer rows than the batch. The
+		 * hash_search() calls with HASH_REMOVE above take care of removing the
+		 * processed entries from the hash table.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname,
+							 NULL);
+	}
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs, missing_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1;
+	uint32		h2;
+
+	/*
+	 * string_hash() hashes at most keysize - 1 bytes, so pass strlen() + 1 to
+	 * ensure the whole name is included in the hash.
+	 */
+	h1 = string_hash(k->nspname, strlen(k->nspname) + 1);
+	h2 = string_hash(k->seqname, strlen(k->seqname) + 1);
+
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+}
+
+/*
+ * Execute the sequence synchronization with error handling. Disable the
+ * subscription, if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
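
For reference, copy_sequences() above batches the remote lookups into one query per transaction. With a batch containing two hypothetical sequences public.s1 and public.s2, the command sent to the publisher would look like this:

    SELECT s.schname, s.seqname, ps.*, seq.seqtypid,
           seq.seqstart, seq.seqincrement, seq.seqmin,
           seq.seqmax, seq.seqcycle
    FROM ( VALUES ('public', 's1'), ('public', 's2') ) AS s (schname, seqname)
    JOIN pg_namespace n ON n.nspname = s.schname
    JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
    JOIN pg_sequence seq ON seq.seqrelid = c.oid
    JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true;

Each such batch covers up to MAX_SEQUENCES_SYNC_PER_BATCH (100) sequences and is applied in its own transaction, so the sequence locks are released at every batch commit.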
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e452a1e78d4..0146783726d 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,12 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker(void)
 {
+	LogicalRepWorkerType wtype = MyLogicalRepWorker->type;
+
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,14 +67,26 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -85,7 +102,48 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -107,6 +165,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -116,17 +180,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static
+	 * because the same values can be reused until the cached relation state
+	 * is invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -138,6 +209,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -149,8 +221,8 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
@@ -159,7 +231,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -184,5 +261,8 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
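
FetchRelationStates() now classifies the non-ready pg_subscription_rel entries by relkind, keeping tables in table_states_not_ready and recording only a boolean for pending sequences. As a rough illustration of what the apply worker derives internally, the pending sequences are the entries whose state is not yet READY ('r'):

    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
    FROM pg_subscription_rel sr
    JOIN pg_class c ON c.oid = sr.srrelid
    WHERE c.relkind = 'S'
      AND sr.srsubstate <> 'r';

This query is only a sketch for understanding the catalog contents; the worker itself goes through GetSubscriptionRelations() rather than SQL.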
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8f606c8a59c..442e614b102 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -379,7 +379,7 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -413,6 +413,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -426,11 +434,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -550,43 +553,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1549,7 +1528,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
@@ -1594,7 +1574,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1617,10 +1597,10 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1632,7 +1612,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1650,7 +1630,7 @@ HasSubscriptionTablesCached(void)
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
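
launch_sync_worker() now owns both the free-slot check and the wal_retrieve_retry_interval throttling, so table and sequence synchronization compete for the same max_sync_workers_per_subscription pool. Running sync workers can be observed through the existing monitoring view; this hunk does not show whether sequencesync workers get their own worker_type label, so treat the output as indicative only:

    SELECT subname, pid, relid::regclass AS rel, worker_type
    FROM pg_stat_subscription;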
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7aaedfb1efa..cf1e7f44935 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2464,7 +2484,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4135,7 +4158,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5578,7 +5604,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5698,8 +5725,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5810,6 +5837,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5829,14 +5860,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5907,6 +5940,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5919,9 +5956,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
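
start_sequence_sync() and DisableSubscriptionAndExit() above reuse the existing disable_on_error handling: when the option is set, a failing sequencesync worker disables the subscription instead of retrying indefinitely. For example (standard subscription commands, not specific to this patch; the subscription name is illustrative):

    ALTER SUBSCRIPTION mysub SET (disable_on_error = true);
    -- after a sequence synchronization failure:
    SELECT subname, subenabled FROM pg_subscription WHERE subname = 'mysub';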
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index 89dc04a72a2..f74f221ca10 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->tbl_sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->tbl_sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(tbl_sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index f07efcc4530..1b64ef50b87 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "tbl_sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "tbl_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* tbl_sync_error_count */
 	values[i++] = Int64GetDatum(subentry->tbl_sync_error_count);
 
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
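
Because sequencesync workers share the worker pool described above and retries are throttled by wal_retrieve_retry_interval (see launch_sync_worker()), those are the two settings to adjust if synchronization appears to lag. The values below are purely illustrative:

    ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
    ALTER SYSTEM SET wal_retrieve_retry_interval = '2s';
    SELECT pg_reload_conf();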
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 754133f227e..32c09596fe4 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,tbl_sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,tbl_sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
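
The amended pg_get_sequence_data() entry (oid 6427) returns int8, bool, and pg_lsn output columns, which is what copy_sequences() consumes as last_value, is_called, and page_lsn. To inspect what the publisher would report for a single sequence (the output column names are those defined by the function and are not spelled out in this hunk; the sequence name is illustrative):

    SELECT * FROM pg_get_sequence_data('public.s1'::regclass);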
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..b42b05e6342 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 90fb622357e..5476aa76245 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter tbl_sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter tbl_sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e97daa348d2..f05dca1c30b 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -264,6 +267,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(Oid subid, Oid relid,
 								   LogicalRepWorkerType wtype);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -271,6 +276,7 @@ extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -281,11 +287,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -350,15 +357,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 166b9f8c89f..50431921c86 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.tbl_sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, tbl_sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, tbl_sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
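
With the seq_sync_error_count column added to pg_stat_subscription_stats, sequence synchronization failures can be monitored per subscription (the subscription name below is illustrative):

    SELECT subname, apply_error_count, seq_sync_error_count, tbl_sync_error_count
    FROM pg_stat_subscription_stats
    WHERE subname = 'mysub';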
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index e8d3ff3fe65..f3a336a8fb9 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,36 +42,47 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT tbl_sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and tbl_sync_error_count > 0 and seq_sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
@@ -77,8 +90,8 @@ sub create_sub_pub_w_errors
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,15 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, tablesync errors, and conflicts are > 0 and stats_reset
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
 # timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	tbl_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -152,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -161,11 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, tablesync errors, and conflicts are 0 and stats_reset timestamp
-# is not NULL.
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	tbl_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -173,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -200,15 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, tablesync errors, and conflicts are > 0 and stats_reset
-# timestamp is NULL.
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	tbl_sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -216,19 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, tablesync errors, and conflicts are 0 and stats_reset timestamp
-# is not NULL.
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	tbl_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -236,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	tbl_sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -250,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..3b583057eb8 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,15 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +55,181 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should cause sync of new sequences
+# of the publisher, and changes to existing sequences should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 43fe3bcd593..4de15b93a1c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1629,6 +1630,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20251024-0004-Documentation-for-sequence-synchronization.patch (text/x-patch)
From a118eb00880070630b688e81727e6702d87527da Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 24 Oct 2025 18:43:55 +0530
Subject: [PATCH v20251024 4/4] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |  14 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 300 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by <function>nextval</function>
+        or <function>setval</function>, <literal>is_called</literal> indicates
+        whether the sequence has been used, and <literal>page_lsn</literal> is
+        the LSN of the most recent WAL record that modified this sequence.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 1afdcd703c5..a2141cf383e 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
@@ -2192,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>seq_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>tbl_sync_error_count</structfield> <type>bigint</type>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

#430Amit Kapila
amit.kapila16@gmail.com
In reply to: shveta malik (#428)
Re: Logical Replication of sequences

On Fri, Oct 24, 2025 at 11:43 AM shveta malik <shveta.malik@gmail.com> wrote:

5)
For the race condition where the worker is about to access the seq
locally while it is concurrently altered, the worker now correctly
reports this. But it reports it as a success scenario. And once the
scenario is reported as 'seq-worker finished', we do not expect it to
start running again without the user doing REFRESH. But in this case,
it runs, see logs. Also, it restarts immediately once, for the same
reason that start_time is reset in the success scenario.
-------
17:35:05.618 IST [132551] LOG: logical replication apply worker for
subscription "sub1" has started
17:35:05.637 IST [132553] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
17:35:05.663 IST [132553] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
17:36:11.987 IST [132553] LOG: skip synchronization of sequence
"public.myseq249" because it has been altered concurrently
17:36:19.614 IST [132553] LOG: logical replication sequence
synchronization for subscription "sub1" - batch #1 = 1 attempted, 0
succeeded, 1 skipped, 0 mismatched, 0 insufficient permission, 0
missing from publisher
17:36:20.335 IST [132553] LOG: logical replication sequence
synchronization worker for subscription "sub1" has finished
17:36:20.435 IST [132586] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
17:36:20.545 IST [132586] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
-------

The behaviour looks slightly odd. Is there anything we can do about
this? Should the skipped case be reported as an ERROR, given that we
leave it in state 'i' in pg_subscription_rel?

The downside of reporting an ERROR as soon as we can't sync values for
one of the sequences is that the other sequences, which could have been
synced, won't get synced. One possibility is that we skip processing
such a sequence while copying sequences, but at the end, if there is
any pending sequence that is not synced, we raise an ERROR. If we do
that, then we may need to give some generic ERROR because there could
be multiple such sequences. Another possibility is that we give a
LOG message like "logical replication sequence sync worker for
subscription \"%s\" will restart because ..." and then do proc_exit(1)
without resetting restart_time. Will that help to address your
concern?
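
For reference, if we go with leaving such sequences in the 'i' (init) state, a
query like the following (just a sketch, not part of the patch) could be run on
the subscriber to see which sequences still need another sync attempt:

SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
WHERE c.relkind = 'S' AND sr.srsubstate = 'i';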

--
With Regards,
Amit Kapila.

#431Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#429)
Re: Logical Replication of sequences

On Fri, Oct 24, 2025 at 8:52 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

1.
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
ss.subid,
s.subname,
ss.apply_error_count,
+ ss.sequence_sync_error_count,
ss.sync_error_count,

The new column name is noticeably longer than the other columns. Can we
name it ss.seq_sync_error_count? We may also want to consider
renaming the existing column sync_error_count to tbl_sync_error_count. Can
we extract this into a separate stats patch?

Modified and extracted a separate patch for tbl_sync_error_count

Hmm, I didn't want to make the stats-related changes before the main
patch. I suggested extracting seq_sync_error_count from the main patch
and keeping it after the main patch. Along with the seq_sync_error_count
stats, we can discuss whether to change the existing parameter to
tbl_sync_error_count.
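
Just to illustrate the suggested naming, monitoring would then read something
like this (a sketch only; 'sub1' is an arbitrary subscription name, and it
assumes we settle on seq_sync_error_count and tbl_sync_error_count):

SELECT subname, apply_error_count, seq_sync_error_count, tbl_sync_error_count
FROM pg_stat_subscription_stats
WHERE subname = 'sub1';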

--
With Regards,
Amit Kapila.

#432Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: vignesh C (#429)
1 attachment(s)
RE: Logical Replication of sequences

On Friday, October 24, 2025 11:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

The attached v20251024 version patch has the changes for the same.
The comments from [1] have also been addressed in this version.

Thanks for updating the patch.

I was reviewing 0003 and have some thoughts on simplifying the code related to
sequence state invalidations and hash tables:

1. I'm considering whether we could lock sequences at the start and maintain
these locks until the copy process finishes, allowing us to remove
the invalidation code.

I understand that the current process is:

1. start a transaction to fetch namespace/seqname for all the sequences in
the pg_subscription_rel
2. start multiple transactions and handle a batch of sequences in each transaction

So if a sequence is altered between steps 1 and 2, then we need to
skip the renamed or dropped sequences in step 2 and invalidate the hash
entries, which looks inelegant.

To improve this, my proposal is to postpone the namespace/seqname fetch logic
until the second step. Initially, we would fetch just the sequence OIDs.
Then, in step 2, we would fetch the namespace/seqname after locking the
sequence. This approach ensures that any concurrent RENAME operations between
steps are irrelevant, as we will use the latest sequence names to query the
publisher, preventing any RENAME during step 2. This logic is also consistent
with the tablesync process, where we lock the table first and get nspname/relname
after that.

2. We currently use a hash table to map remote sequence information to local
sequence data. I'm exploring the possibility of using a List instead.

The idea is to pass the index of the sequence in the List to the query, like
(a simplified SQL sketch follows below):

"FROM ( VALUES %s ) AS s (schname, seqname, seqidx)"

Upon receiving the results, we can directly map remote sequences to local
ones using: "list_nth(seqinfos, seqidx);"
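
For example, the per-batch lookup query built by the worker would look roughly
like the following (a simplified sketch with hypothetical sequence names,
omitting the ps.* sequence-data columns that the patch also selects):

SELECT s.seqidx, seq.seqtypid, seq.seqstart, seq.seqincrement,
       seq.seqmin, seq.seqmax, seq.seqcycle
FROM (VALUES ('public', 's1', 0), ('public', 's2', 1)) AS s (schname, seqname, seqidx)
JOIN pg_namespace n ON n.nspname = s.schname
JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
JOIN pg_sequence seq ON seq.seqrelid = c.oid;

Each returned seqidx then maps straight back to the local entry via
list_nth(seqinfos, seqidx).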

Here is a patch on top of 0003 that implements the above ideas. Please take a
look and see if it makes the code look better.

Best Regards,
Hou zj

Attachments:

vRefactoring-0001-Replace-Hash-table-with-a-List-and-elim.patch (application/octet-stream)
From 2bcd151f2e639c2aa7c5d5d917750b3e4e904428 Mon Sep 17 00:00:00 2001
From: Zhijie Hou <houzj.fnst@fujitsu.com>
Date: Mon, 27 Oct 2025 10:41:38 +0800
Subject: [PATCH vRefactoring] Replace Hash table with a List and eliminate
 invalidations

---
 .../replication/logical/sequencesync.c        | 304 ++++++------------
 src/include/catalog/pg_subscription_rel.h     |   3 +-
 2 files changed, 97 insertions(+), 210 deletions(-)

diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 5610ebe252d..2de75c2e30d 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -60,11 +60,11 @@
 
 #include "postgres.h"
 
+#include "access/sequence.h"
 #include "access/table.h"
 #include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription_rel.h"
 #include "commands/sequence.h"
-#include "common/hashfn.h"
 #include "pgstat.h"
 #include "postmaster/interrupt.h"
 #include "replication/logicallauncher.h"
@@ -77,23 +77,21 @@
 #include "utils/guc.h"
 #include "utils/inval.h"
 #include "utils/lsyscache.h"
+#include "utils/memutils.h"
 #include "utils/pg_lsn.h"
 #include "utils/rls.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-#define REMOTE_SEQ_COL_COUNT 11
+#define REMOTE_SEQ_COL_COUNT 10
 
 typedef enum CopySeqResult
 {
 	COPYSEQ_SUCCESS,
 	COPYSEQ_MISMATCH,
 	COPYSEQ_INSUFFICIENT_PERM,
-	COPYSEQ_SKIPPED
 } CopySeqResult;
 
-static HTAB *sequences_to_copy = NULL;
-
 /*
  * Handle sequence synchronization cooperation from the apply worker.
  *
@@ -230,7 +228,7 @@ append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
  * publisher.
  */
 static void
-get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
+get_remote_sequence_info(TupleTableSlot *slot, int *seqidx,
 						 int64 *last_value, bool *is_called,
 						 XLogRecPtr *page_lsn, Oid *remote_typid,
 						 int64 *remote_start, int64 *remote_increment,
@@ -240,10 +238,7 @@ get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
 	bool		isnull;
 	int			col = 0;
 
-	key->nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
-	Assert(!isnull);
-
-	key->seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	*seqidx = DatumGetInt32(slot_getattr(slot, ++col, &isnull));
 	Assert(!isnull);
 
 	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
@@ -281,30 +276,19 @@ get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
  * Compare sequence parameters from publisher with local sequence.
  */
 static CopySeqResult
-validate_sequence(Relation sequence_rel, LogicalRepSequenceInfo *seqinfo,
-				  Oid remote_typid, int64 remote_start,
-				  int64 remote_increment, int64 remote_min,
+validate_sequence(LogicalRepSequenceInfo *seqinfo, Oid remote_typid,
+				  int64 remote_start, int64 remote_increment, int64 remote_min,
 				  int64 remote_max, bool remote_cycle)
 {
 	Form_pg_sequence local_seq;
 	HeapTuple	tup;
 	CopySeqResult	result = COPYSEQ_SUCCESS;
 
-	/* Sequence was concurrently dropped */
-	if (!sequence_rel)
-		return COPYSEQ_SKIPPED;
-
 	/* Sequence was concurrently dropped */
 	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
 	if (!HeapTupleIsValid(tup))
-		return COPYSEQ_SKIPPED;
-
-	/* Sequence was concurrently invalidated */
-	if (!seqinfo->entry_valid)
-	{
-		ReleaseSysCache(tup);
-		return COPYSEQ_SKIPPED;
-	}
+		elog(ERROR, "cache lookup failed for sequence %u",
+			 seqinfo->localrelid);
 
 	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
 	if (local_seq->seqtypid != remote_typid ||
@@ -329,6 +313,7 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 	UserContext ucxt;
 	AclResult	aclresult;
 	bool		run_as_owner = MySubscription->runasowner;
+	Oid			seqoid = seqinfo->localrelid;
 
 	/*
 	 * Make sure that the sequence is copied as table owner, unless the user
@@ -337,7 +322,8 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 	if (!MySubscription->runasowner)
 		SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
 
-	aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+	aclresult = pg_class_aclcheck(seqoid, GetUserId(), ACL_UPDATE);
+
 	if (aclresult != ACLCHECK_OK)
 	{
 		if (!run_as_owner)
@@ -346,7 +332,7 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 		return COPYSEQ_INSUFFICIENT_PERM;
 	}
 
-	SetSequence(seqinfo->localrelid, last_value, is_called);
+	SetSequence(seqoid, last_value, is_called);
 
 	if (!run_as_owner)
 		RestoreUserContext(&ucxt);
@@ -356,7 +342,7 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 	 * sequence as READY. The LSN represents the WAL position of the remote
 	 * sequence at the time it was synchronized.
 	 */
-	UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+	UpdateSubscriptionRelState(MySubscription->oid, seqoid,
 							   SUBREL_STATE_READY, page_lsn, false);
 
 	return COPYSEQ_SUCCESS;
@@ -366,65 +352,80 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
  * Copy existing data of sequences from the publisher.
  */
 static void
-copy_sequences(WalReceiverConn *conn, Oid subid)
+copy_sequences(WalReceiverConn *conn, List *seqinfos)
 {
-	int			total_seqs = hash_get_num_entries(sequences_to_copy);
 	int			current_index = 0;
 	StringInfo	mismatched_seqs = makeStringInfo();
 	StringInfo	missing_seqs = makeStringInfo();
 	StringInfo	insuffperm_seqs = makeStringInfo();
 	StringInfo	seqstr = makeStringInfo();
 	StringInfo	cmd = makeStringInfo();
-	HASH_SEQ_STATUS status;
-	LogicalRepSequenceInfo *entry;
 
 #define MAX_SEQUENCES_SYNC_PER_BATCH 100
 
 	ereport(LOG,
 			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
-				   MySubscription->name, total_seqs));
+				   MySubscription->name, list_length(seqinfos)));
 
-	while (current_index < total_seqs)
+	while (current_index < list_length(seqinfos))
 	{
-		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID,
 		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
 		int			batch_size = 0;
 		int			batch_succeeded_count = 0;
 		int			batch_mismatched_count = 0;
-		int			batch_skipped_count = 0;
 		int			batch_insuffperm_count = 0;
 
 		WalRcvExecResult *res;
 		TupleTableSlot *slot;
+		ListCell	   *lc;
 
 		StartTransactionCommand();
-		hash_seq_init(&status, sequences_to_copy);
 
-		/* Collect a batch of sequences */
-		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		for_each_from(lc, seqinfos, current_index)
 		{
-			if (entry->remote_seq_queried)
+			Relation sequence_rel;
+			MemoryContext oldctx;
+			LogicalRepSequenceInfo *seqinfo = (LogicalRepSequenceInfo *) lfirst(lc);
+
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			/* Skip if sequence was dropped concurrently */
+			if (!sequence_rel)
+			{
+				seqinfos = foreach_delete_current(seqinfos, lc);
 				continue;
+			}
+
+			/* Save sequence info */
+			oldctx = MemoryContextSwitchTo(TopMemoryContext);
+			seqinfo->seqname = pstrdup(RelationGetRelationName(sequence_rel));
+			seqinfo->nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+			seqinfo->seqowner = sequence_rel->rd_rel->relowner;
+			MemoryContextSwitchTo(oldctx);
+
+			/*
+			 * Hold the lock until transaction end to prevent concurrent
+			 * sequence alter operation.
+			 */
+			table_close(sequence_rel, NoLock);
 
 			if (seqstr->len > 0)
 				appendStringInfoString(seqstr, ", ");
 
-			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
-			entry->remote_seq_queried = true;
+			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
+							 seqinfo->nspname, seqinfo->seqname,
+							 foreach_current_index(lc));
 
-			batch_size++;
-			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
-				(current_index + batch_size == total_seqs))
+			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
 				break;
 		}
 
-		hash_seq_term(&status);
-
 		appendStringInfo(cmd,
-						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "SELECT s.seqidx, ps.*, seq.seqtypid,\n"
 						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
 						 "       seq.seqmax, seq.seqcycle\n"
-						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname, seqidx)\n"
 						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
 						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
 						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
@@ -441,20 +442,18 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 		{
-			LogicalRepSequenceInfo *seqinfo;
-			LogicalRepSeqHashKey key;
 			int64		last_value;
 			bool		is_called;
 			XLogRecPtr	page_lsn;
+			int			seqidx;
 			Oid			remote_typid;
 			int64		remote_start;
 			int64		remote_increment;
 			int64		remote_min;
 			int64		remote_max;
 			bool		remote_cycle;
-			bool		found;
 			CopySeqResult result;
-			Relation	sequence_rel;
+			LogicalRepSequenceInfo *seqinfo;
 
 			CHECK_FOR_INTERRUPTS();
 
@@ -464,54 +463,42 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 				ProcessConfigFile(PGC_SIGHUP);
 			}
 
-			get_remote_sequence_info(slot, &key, &last_value, &is_called,
+			get_remote_sequence_info(slot, &seqidx, &last_value, &is_called,
 									 &page_lsn, &remote_typid, &remote_start,
 									 &remote_increment, &remote_min,
 									 &remote_max, &remote_cycle);
 
-			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
-			Assert(seqinfo);
+			seqinfo = (LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+			seqinfo->found_on_pub = true;
 
-			/* Try to open sequence */
-			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
-
-			result = validate_sequence(sequence_rel, seqinfo, remote_typid,
+			result = validate_sequence(seqinfo, remote_typid,
 									   remote_start, remote_increment,
 									   remote_min, remote_max, remote_cycle);
+
 			if (result == COPYSEQ_SUCCESS)
-				result = copy_sequence(seqinfo, last_value, is_called, page_lsn);
+				result = copy_sequence(seqinfo, last_value, is_called,
+									   page_lsn);
 
 			switch (result)
 			{
+				case COPYSEQ_SUCCESS:
+					batch_succeeded_count++;
+
+					elog(DEBUG1, "logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+						 MySubscription->name, seqinfo->nspname, seqinfo->seqname);
+
+					break;
 				case COPYSEQ_MISMATCH:
-					append_sequence_name(mismatched_seqs, key.nspname,
-										 key.seqname, &batch_mismatched_count);
+					append_sequence_name(mismatched_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_mismatched_count);
 					break;
 				case COPYSEQ_INSUFFICIENT_PERM:
-					append_sequence_name(insuffperm_seqs, key.nspname,
-										 key.seqname, &batch_insuffperm_count);
-					break;
-				case COPYSEQ_SKIPPED:
-					ereport(LOG,
-							errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered or dropped concurrently",
-								   key.nspname, key.seqname));
-					batch_skipped_count++;
-					break;
-				default:
-					ereport(DEBUG1,
-							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
-											MySubscription->name, key.nspname,
-											key.seqname));
-					batch_succeeded_count++;
+					append_sequence_name(insuffperm_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_insuffperm_count);
 					break;
 			}
-
-			/* Remove processed sequence from the hash table. */
-			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
-				elog(ERROR, "hash table corrupted");
-
-			if (sequence_rel)
-				table_close(sequence_rel, NoLock);
 		}
 
 		ExecDropSingleTupleTableSlot(slot);
@@ -520,35 +507,26 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 		resetStringInfo(cmd);
 
 		ereport(LOG,
-				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d insufficient permission, %d missing from publisher",
 					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
-					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
-					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+					   batch_succeeded_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count + batch_insuffperm_count)));
 
 		/* Commit this batch, and prepare for next batch */
 		CommitTransactionCommand();
 
 		/*
-		 * current_indexes is not incremented sequentially because some
+		 * current_index is not incremented sequentially because some
 		 * sequences may be missing, and the number of fetched rows may not
-		 * match the batch size. The `hash_search` with HASH_REMOVE takes care
-		 * of the count.
+		 * match the batch size.
 		 */
 		current_index += batch_size;
 	}
 
-	/*
-	 * Any sequences remaining in the hash table were not found on the
-	 * publisher. This is because they were included in a query
-	 * (remote_seq_queried) but were not returned in the result set.
-	 */
-	hash_seq_init(&status, sequences_to_copy);
-	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
-	{
-		Assert(entry->remote_seq_queried);
-		append_sequence_name(missing_seqs, entry->nspname, entry->seqname,
-							 NULL);
-	}
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, seqinfos)
+		if (!seqinfo->found_on_pub)
+			append_sequence_name(missing_seqs, seqinfo->nspname,
+								 seqinfo->seqname, NULL);
 
 	/* Report permission issues, mismatches, or missing sequences */
 	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
@@ -559,68 +537,6 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 	destroyStringInfo(insuffperm_seqs);
 }
 
-/*
- * Relcache invalidation callback
- */
-static void
-sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
-{
-	HASH_SEQ_STATUS status;
-	LogicalRepSequenceInfo *entry;
-
-	/* Quick exit if no sequence is listed yet */
-	if (hash_get_num_entries(sequences_to_copy) == 0)
-		return;
-
-	if (reloid != InvalidOid)
-	{
-		hash_seq_init(&status, sequences_to_copy);
-
-		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
-		{
-			if (entry->localrelid == reloid)
-			{
-				entry->entry_valid = false;
-				hash_seq_term(&status);
-				break;
-			}
-		}
-	}
-	else
-	{
-		/* invalidate all entries */
-		hash_seq_init(&status, sequences_to_copy);
-		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
-			entry->entry_valid = false;
-	}
-}
-
-static uint32
-LogicalRepSeqHash(const void *key, Size keysize)
-{
-	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
-	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
-	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
-
-	return h1 ^ h2;
-}
-
-static int
-LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
-{
-	int			cmp;
-	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
-	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
-
-	/* Compare by namespace name first */
-	cmp = strcmp(k1->nspname, k2->nspname);
-	if (cmp != 0)
-		return cmp;
-
-	/* If namespace names are equal, compare by sequence name */
-	return strcmp(k1->seqname, k2->seqname);
-}
-
 /*
  * Start syncing the sequences in the sequencesync worker.
  */
@@ -635,21 +551,7 @@ LogicalRepSyncSequences(void)
 	SysScanDesc scan;
 	Oid			subid = MyLogicalRepWorker->subid;
 	StringInfoData app_name;
-	HASHCTL		ctl;
-	bool		found;
-	LogicalRepSequenceInfo *seq_entry;
-
-	ctl.keysize = sizeof(LogicalRepSeqHashKey);
-	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
-	ctl.hcxt = CacheMemoryContext;
-	ctl.hash = LogicalRepSeqHash;
-	ctl.match = LogicalRepSeqMatchFunc;
-	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
-									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
-
-	/* Watch for invalidation events. */
-	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
-								  (Datum) 0);
+	List	   *seqinfos = NIL;
 
 	StartTransactionCommand();
 
@@ -667,12 +569,12 @@ LogicalRepSyncSequences(void)
 
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 2, skey);
+
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
 		Form_pg_subscription_rel subrel;
 		char		relkind;
-		Relation	sequence_rel;
-		LogicalRepSeqHashKey key;
+		LogicalRepSequenceInfo *seq;
 		MemoryContext oldctx;
 
 		CHECK_FOR_INTERRUPTS();
@@ -684,32 +586,17 @@ LogicalRepSyncSequences(void)
 		if (relkind != RELKIND_SEQUENCE)
 			continue;
 
-		/* Skip if sequence was dropped concurrently */
-		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
-		if (!sequence_rel)
-			continue;
-
-		key.seqname = RelationGetRelationName(sequence_rel);
-		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-
-		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
-		Assert(!found);
-
-		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+		/*
+		 * Worker needs to process sequences across transaction boundary, so
+		 * allocate them under long-lived context.
+		 */
+		oldctx = MemoryContextSwitchTo(TopMemoryContext);
 
-		seq_entry->seqname = pstrdup(key.seqname);
-		seq_entry->nspname = pstrdup(key.nspname);
-		seq_entry->localrelid = subrel->srrelid;
-		seq_entry->remote_seq_queried = false;
-		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
-		seq_entry->entry_valid = true;
+		seq = palloc0_object(LogicalRepSequenceInfo);
+		seq->localrelid = subrel->srrelid;
+		seqinfos = lappend(seqinfos, seq);
 
 		MemoryContextSwitchTo(oldctx);
-
-		table_close(sequence_rel, RowExclusiveLock);
 	}
 
 	/* Cleanup */
@@ -718,6 +605,9 @@ LogicalRepSyncSequences(void)
 
 	CommitTransactionCommand();
 
+	if (!seqinfos)
+		return;
+
 	/* Is the use of a password mandatory? */
 	must_use_password = MySubscription->passwordrequired &&
 		!MySubscription->ownersuperuser;
@@ -741,9 +631,7 @@ LogicalRepSyncSequences(void)
 
 	pfree(app_name.data);
 
-	/* If there are any sequences that need to be copied */
-	if (hash_get_num_entries(sequences_to_copy))
-		copy_sequences(LogRepWorkerWalRcvConn, subid);
+	copy_sequences(LogRepWorkerWalRcvConn, seqinfos);
 }
 
 /*
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index b42b05e6342..d06e28280cf 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -93,9 +93,8 @@ typedef struct LogicalRepSequenceInfo
 	char	   *seqname;
 	char	   *nspname;
 	Oid			localrelid;
-	bool		remote_seq_queried;
 	Oid			seqowner;
-	bool		entry_valid;
+	bool		found_on_pub;
 } LogicalRepSequenceInfo;
 
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
-- 
2.51.1.windows.1

#433Dilip Kumar
dilipbalaut@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#432)
Re: Logical Replication of sequences

On Mon, Oct 27, 2025 at 8:23 AM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
wrote:

On Friday, October 24, 2025 11:22 PM vignesh C <vignesh21@gmail.com>
wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com>

wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com>

wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

One question; I am not sure if this has been discussed before. While
getting sequence information from the remote, we are also getting the
page_lsn of the sequence and storing it in pg_subscription_rel. Is it
just for the user to see and compare whether the sequence is synced to
the latest LSN, or is it used for anything else as well? In our patch
set, I don't see much usability information about this field.

--
Regards,
Dilip Kumar
Google

#434Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#433)
Re: Logical Replication of sequences

On Mon, Oct 27, 2025 at 10:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Oct 27, 2025 at 8:23 AM Zhijie Hou (Fujitsu) <
houzj.fnst@fujitsu.com> wrote:

On Friday, October 24, 2025 11:22 PM vignesh C <vignesh21@gmail.com>
wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com>

wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com>

wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

One question; I am not sure if this has been discussed before. While
getting sequence information from the remote, we are also getting the
page_lsn of the sequence and storing it in pg_subscription_rel. Is it
just for the user to see and compare whether the sequence is synced to
the latest LSN, or is it used for anything else as well? In our patch
set, I don't see much usability information about this field.

Overall the patch LGTM, and I really like the idea of getting rid of the hash
table and converting it into a list; now we don't need to restart the scan
across transaction boundaries as we did with the hash. However, I have one
more suggestion.

 	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+		copy_sequences(LogRepWorkerWalRcvConn, subid);

I think we should call 'walrcv_connect' only if we actually need to call
copy_sequences, right?
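
Something like the following, i.e. just hoisting the emptiness check above the
connection attempt (a sketch of the suggestion only, reusing the variables from
the hunk quoted above; the list-based refactor posted up-thread does the
equivalent early return when its seqinfos list is empty):

```
	/* Nothing to copy, so don't bother connecting to the publisher at all. */
	if (hash_get_num_entries(sequences_to_copy) == 0)
		return;

	LogRepWorkerWalRcvConn =
		walrcv_connect(MySubscription->conninfo, true, true,
					   must_use_password,
					   app_name.data, &err);
	if (LogRepWorkerWalRcvConn == NULL)
		ereport(ERROR,
				errcode(ERRCODE_CONNECTION_FAILURE),
				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
					   MySubscription->name, err));

	pfree(app_name.data);

	copy_sequences(LogRepWorkerWalRcvConn, subid);
```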

--
Regards,
Dilip Kumar
Google

#435vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#433)
Re: Logical Replication of sequences

On Mon, 27 Oct 2025 at 10:04, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Oct 27, 2025 at 8:23 AM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com> wrote:

On Friday, October 24, 2025 11:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

One question; I am not sure if this has been discussed before. While getting sequence information from the remote, we are also getting the page_lsn of the sequence and storing it in pg_subscription_rel. Is it just for the user to see and compare whether the sequence is synced to the latest LSN, or is it used for anything else as well? In our patch set, I don't see much usability information about this field.

This is mainly intended for the following purposes: a) to determine
whether the sequence requires resynchronization by comparing it with
the latest LSN on the publisher; b) to maintain consistency with
table synchronization behavior; and c) to inform users up to which LSN
the sequence has been synchronized.
Further details will be documented in an upcoming patch.

Regards,
Vignesh

#436shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#432)
Re: Logical Replication of sequences

On Mon, Oct 27, 2025 at 8:23 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Friday, October 24, 2025 11:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

The attached v20251024 version patch has the changes for the same.
The comments from [1] have also been addressed in this version.

Thanks for updating the patch.

I was reviewing 0003 and have some thoughts on simplifying the code related to
sequence state invalidation and the hash table:

1. I'm considering whether we could lock the sequences at the start and hold
these locks until the copy process finishes, allowing us to remove the
invalidation code.

I understand that the current process is:

1. start a transaction to fetch namespace/seqname for all the sequences in
the pg_subscription_rel
2. start multiple transactions and handle a batch of sequences in each transaction

So if a sequence is altered between steps 1 and 2, then we need to skip the
renamed or dropped sequences in step 2 and invalidate the hash entry, which
looks inelegant.

To improve this, my proposal is to postpone the namespace/seqname fetch logic
until the second step. Initially, we would fetch just the sequence OIDs.
Then, in step 2, we would fetch the namespace/seqname after locking the
sequence. This approach ensures that any concurrent RENAME operations between
steps are irrelevant, as we will use the latest sequence names to query the
publisher, preventing any RENAME during step 2. This logic is also consistent
with the tablesync process, where we lock the table first and get nspname/relname
after that.

2. We currently use a hash table to map remote sequence information to local
sequence data. I'm exploring the possibility of using a List instead.

The idea is to pass the index of each sequence in the List to the query, like:

"FROM ( VALUES %s ) AS s (schname, seqname, seqidx)"

Upon receiving the results, we can directly map remote sequences to local
ones using "list_nth(seqinfos, seqidx);"

Here is a patch on top of 0003 that implements the above ideas. Please take a
look at this and see if it makes the code look better.
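
To make the proposal concrete, here is a rough, illustrative sketch of how the
two ideas could fit together (this is not the attached patch; the function name
build_remote_seq_query and the exact query text are made up for illustration,
while table_open, get_namespace_name, quote_literal_cstr and friends are the
usual PostgreSQL internals, and LogicalRepSequenceInfo comes from the patch):

```
#include "postgres.h"

#include "access/table.h"
#include "catalog/pg_subscription_rel.h"	/* LogicalRepSequenceInfo (patch) */
#include "lib/stringinfo.h"
#include "nodes/pg_list.h"
#include "storage/lockdefs.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"

/*
 * Sketch: lock each sequence, resolve its *current* name only after the lock
 * is taken, and embed the list index (seqidx) into the remote query so that
 * result rows map straight back to the List entries.
 */
static void
build_remote_seq_query(StringInfo cmd, List *seqinfos)
{
	int			seqidx = 0;
	StringInfoData values;

	initStringInfo(&values);

	foreach_ptr(LogicalRepSequenceInfo, seqinfo, seqinfos)
	{
		Relation	seqrel;

		/* Lock the sequence; a concurrent RENAME now has to wait for us. */
		seqrel = table_open(seqinfo->localrelid, RowExclusiveLock);

		/* Resolve the current names only after the lock is held. */
		seqinfo->seqname = pstrdup(RelationGetRelationName(seqrel));
		seqinfo->nspname = get_namespace_name(RelationGetNamespace(seqrel));

		if (seqidx > 0)
			appendStringInfoString(&values, ", ");
		appendStringInfo(&values, "(%s, %s, %d)",
						 quote_literal_cstr(seqinfo->nspname),
						 quote_literal_cstr(seqinfo->seqname),
						 seqidx++);

		/* Keep the lock until the transaction ends. */
		table_close(seqrel, NoLock);
	}

	/* The target list is omitted here; only the VALUES/seqidx part matters. */
	appendStringInfo(cmd,
					 "... FROM ( VALUES %s ) AS s (schname, seqname, seqidx)",
					 values.data);
	pfree(values.data);
}
```

On the receiving side each returned row carries seqidx, so the matching local
entry is simply list_nth(seqinfos, seqidx), which is exactly what copy_sequences()
does in the patch.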

I like the overall idea here. With this approach, since we first fetch
the relids and retrieve the sequence names only later (after taking
the exclusive lock), our remote query will always use the latest
names; whether they are the original or altered ones doesn't matter.
The sequence name itself can't change during this step, so we're safe
on that front. As a result, the race conditions I mentioned in [1] and
[2] are no longer applicable. I'll still go through the patch in more
detail to verify and review it.

[1]: /messages/by-id/CAJpy0uC-Jx2L6tOTnDQ_Zwz99X3HQDik6tG=+1a71SxZFiy12w@mail.gmail.com
[2]: /messages/by-id/CAJpy0uAQ43WjvuBi9F_hOJwsa1veGCJJs0ogH1o_o9AAv0jTfg@mail.gmail.com

thanks
Shveta

#437shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#430)
Re: Logical Replication of sequences

On Sat, Oct 25, 2025 at 12:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Oct 24, 2025 at 11:43 AM shveta malik <shveta.malik@gmail.com> wrote:

5)
For the race condition where the worker is about to access the
sequence locally while it is altered concurrently, the worker now
correctly reports this, but it reports it as a success scenario. Once
the scenario is reported as 'seq-worker finished', we do not expect it
to start running again without the user doing REFRESH. But in this
case it runs; see the logs. It also restarts once immediately, for the
same reason: start_time is reset in the success scenario.
-------
17:35:05.618 IST [132551] LOG: logical replication apply worker for
subscription "sub1" has started
17:35:05.637 IST [132553] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
17:35:05.663 IST [132553] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
17:36:11.987 IST [132553] LOG: skip synchronization of sequence
"public.myseq249" because it has been altered concurrently
17:36:19.614 IST [132553] LOG: logical replication sequence
synchronization for subscription "sub1" - batch #1 = 1 attempted, 0
succeeded, 1 skipped, 0 mismatched, 0 insufficient permission, 0
missing from publisher
17:36:20.335 IST [132553] LOG: logical replication sequence
synchronization worker for subscription "sub1" has finished
17:36:20.435 IST [132586] LOG: logical replication sequence
synchronization worker for subscription "sub1" has started
17:36:20.545 IST [132586] LOG: logical replication sequence
synchronization for subscription "sub1" - total unsynchronized: 1
-------

The behaviour looks slightly odd. Is there anything we can do about
this? Should the skipped case be reported as an ERROR, given that we
leave it in state 'i' in pg_subscription_rel?

The downside of reporting an ERROR as soon as we can't sync values for
one of the sequences is that the other sequences which could be synced
won't get synced.

Yes, I agree.

The other possibility is that we skip processing
such a sequence while copying sequences, but at the end, if any
pending sequence is not synced, we raise an ERROR. If we do that,
we may need to give some generic ERROR because there could be
multiple such sequences. Another possibility is that we give a
LOG message like "logical replication sequence sync worker for
subscription \"%s\" will restart because ..." and then do proc_exit(1)
without resetting restart_time. Would that help to address your
concern?

Thanks for suggesting these potential solutions. After reviewing the
new approach proposed in [1] by Hou-San, I believe this race condition
is no longer applicable, so we’re safe.

[1]: /messages/by-id/TY4PR01MB169078D0BB792F1EE438A2EE294FCA@TY4PR01MB16907.jpnprd01.prod.outlook.com

thanks
Shveta

#438Chao Li
li.evan.chao@gmail.com
In reply to: vignesh C (#429)
Re: Logical Replication of sequences

On Oct 24, 2025, at 23:22, vignesh C <vignesh21@gmail.com> wrote:

Regards,
Vignesh
<v20251024-0001-Rename-sync_error_count-to-tbl_sync_error_.patch><v20251024-0002-Add-worker-type-argument-to-logicalrep_wor.patch><v20251024-0003-New-worker-for-sequence-synchronization-du.patch><v20251024-0004-Documentation-for-sequence-synchronization.patch>

The changes in 0001 are straightforward and look good. I haven’t reviewed 0004 yet. I have a few comments on 0002 and 0003.

1 - 0002
```
  * We are only interested in the leader apply worker or table sync worker.
+ * For apply workers, the relid should be set to InvalidOid, as they manage
+ * changes across all tables and sequences. For table sync workers, the relid
+ * should be set to the OID of the relation being synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(Oid subid, Oid relid, LogicalRepWorkerType wtype,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;

Assert(LWLockHeldByMe(LogicalRepWorkerLock));
```

The comment says that “for apply workers, the relid should be set to InvalidOid”, so is it worth adding an assert for that?

2 - 0002
```
-	/* Search for attached worker for a given subscription id. */
+	/* Search for the attached worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
```

Minor issues with the comment:

* we are not searching for a specific worker, so “the” should be “a”
* “attached” is confusing. In the old comment, “attached” was tied to “a given subscription id”, but now, attached to what?

So suggested revision:

“/* Search for a logical replication worker matching the specified criteria */”

3 - 0002
```
 /*
  * Stop the logical replication worker for subid/relid, if any.
+ *
+ * Similar to logicalrep_worker_find, relid should be set to a valid OID only
+ * for table sync workers.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(Oid subid, Oid relid, LogicalRepWorkerType wtype)
```

The comment should be updated: subid/relid => subid/relid/wtype.

4 - 0002
```
@@ -477,7 +477,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);

 			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-												rstate->relid, false);
+												rstate->relid,
+												WORKERTYPE_TABLESYNC, true);
```

Why was only_running changed from false to true? This commit adds a new worker type; it isn’t meant to change the existing logic.

5 - 0003
```
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/*
+	 * Set the last_seqsync_start_time for the sequence worker in the apply
+	 * worker instead of the sequence sync worker, as the sequence sync worker
+	 * has finished and is about to exit.
+	 */
+	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+									WORKERTYPE_APPLY, true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
```

Two comments for this new function:

* The function comment and the in-code comment are redundant. I suggest moving the in-code comment into the function comment.
* Why is LW_SHARED used? We are writing worker->last_seqsync_start_time, so shouldn’t LW_EXCLUSIVE be used? (See the sketch below.)
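
A sketch of what that second point suggests (same body as the quoted function,
with only the lock mode changed; whether the shared lock is in fact sufficient
here is for the patch authors to judge):

```
	LogicalRepWorker *worker;

	LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);

	/* We modify shared worker state below, hence the exclusive lock. */
	worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
									WORKERTYPE_APPLY, true);
	if (worker)
		worker->last_seqsync_start_time = 0;

	LWLockRelease(LogicalRepWorkerLock);
```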

6 - 0003
```
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
```

I think there could be a race condition here. Because the lock is acquired in LW_SHARED mode, multiple callers may read the same nsyncworkers, and the sync worker is then launched based on a potentially stale nsyncworkers, since another worker might be started between LWLockRelease() and launch_sync_worker().

But if that is not the case and only one caller is expected to call ProcessSyncingSequencesForApply(), then why is the lock needed?

7 - 0003
```
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s)",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
```

“Grant permissions” is unclear. Should it be “Grant UPDATE privilege”?

8 - 0003
```
+ appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
```

“To have matching parameters as publishers” doesn’t read well grammatically. Maybe revise it to “to match the publisher’s parameters”.

9 - 0003
```
+		/*
+		 * current_indexes is not incremented sequentially because some
+		 * sequences may be missing, and the number of fetched rows may not
+		 * match the batch size. The `hash_search` with HASH_REMOVE takes care
+		 * of the count.
+		 */
```

Typo: current_indexes => current_index

10 - 0003
```
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
```

The comment “This is a clean exit of the sequencesync worker” is specific to the “if” branch, so I suggest moving it into the “if”. Also, that sentence is not really needed; keep it consistent with the comment in the “else” branch.

11 - 0003
```
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
```

The entire function body is under an “if”, so we can do “if (!…) return” instead, which saves a level of indentation.
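
In other words, something along these lines (sketch only; the condition is just
the inverse of the one in the quoted hunk, and the rest of the function is
elided):

```
void
launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
{
	/* Bail out early if all sync worker slots are already in use. */
	if (nsyncworkers >= max_sync_workers_per_subscription)
		return;

	/* ... the rest of the function, now one indentation level shallower ... */
}
```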

Best regards,

Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#439Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#435)
Re: Logical Replication of sequences

On Mon, Oct 27, 2025 at 12:19 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 27 Oct 2025 at 10:04, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Mon, Oct 27, 2025 at 8:23 AM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com> wrote:

On Friday, October 24, 2025 11:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

One question; I am not sure if this has been discussed before. While getting sequence information from the remote, we are also getting the page_lsn of the sequence and storing it in pg_subscription_rel. Is it just for the user to see and compare whether the sequence is synced to the latest LSN, or is it used for anything else as well? In our patch set, I don't see much usability information about this field.

This is mainly intended for the following purposes: a) to determine
whether the sequence requires resynchronization by comparing it with
the latest LSN on the publisher; b) to maintain consistency with
table synchronization behavior; and c) to inform users up to which LSN
the sequence has been synchronized.
Further details will be documented in an upcoming patch.

Can we use it to build an auto-sequence-sync feature? One can imagine
that, at some threshold interval, the apply_worker checks whether any of
the replicated sequences are out of sync and, if so, syncs those. We
could do this before the apply_worker waits for activity, or on a clean
shutdown. That way users wouldn't need to manually sync these sequences
before an upgrade.

--
With Regards,
Amit Kapila.

#440vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#431)
4 attachment(s)
Re: Logical Replication of sequences

On Sat, 25 Oct 2025 at 12:27, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Oct 24, 2025 at 8:52 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

1.
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
ss.subid,
s.subname,
ss.apply_error_count,
+ ss.sequence_sync_error_count,
ss.sync_error_count,

The new parameter name is noticeably longer than other columns. Can we
name it as ss.seq_sync_error_count. We may also want to reconsider
changing existing column sync_error_count to tbl_sync_error_count. Can
we extract this in a separate stats patch?

Modified and extracted a separate patch for tbl_sync_error_count

Hmm, I didn't want to make the stats-related changes before the main
patch. I suggested extracting seq_sync_error_count from the main patch
and keeping it after the main patch.

Made this change.

Along with the seq_sync_error_count
stats, we can discuss whether to change the existing parameter to
tbl_sync_error_count.

I have removed the tbl_sync_error_count changes for now; I will add them
back if required when we get to that patch.

This version also addresses Chao's comments on 0002, which he posted at [1].
The attached patch has the changes for the same.

[1]: /messages/by-id/BABE39BA-6893-4D65-8432-5523960FFC2B@gmail.com

Regards,
Vignesh

Attachments:

v20251027-0003-Documentation-for-sequence-synchronization.patch
From 9ea5fdcedf9e89e9a518e64020b606963131b23d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251027 3/4] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 239 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 291 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last value set for the sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..548aab31960 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,209 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An ERROR is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to continue
+    the synchronization process until all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="missing-sequences">
+   <title>Missing Sequences</title>
+   <para>
+    During sequence synchronization, if a sequence is dropped on the
+    publisher, the sequence synchronization worker will identify this and
+    remove it from sequence synchronization on the subscriber.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences at the subscriber side using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2295,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2631,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2645,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..242105e2cba 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

v20251027-0004-Add-seq_sync_error_count-to-subscription-s.patch
From e927655f635f9a8643a5dc4fefc0d1ffd1024319 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:11:42 +0530
Subject: [PATCH v20251027 4/4] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
---
 doc/src/sgml/monitoring.sgml                  |  9 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 17 ++--
 .../utils/activity/pgstat_subscription.c      | 27 +++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 ++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 80 ++++++++++++-------
 11 files changed, 122 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 242105e2cba..0b2402b6ea6 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2193,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>seq_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..2b1854fd940 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 4a26277e242..0faf76010fc 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -775,6 +775,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8d85ba4e419..87d3e723579 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 13e11f75752..2dec0bff55a 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5606,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, true);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5957,15 +5958,11 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (wtype != WORKERTYPE_SEQUENCESYNC)
-	{
-		/*
-		* Report the worker failed during either table synchronization or
-		* apply.
-		*/
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										!am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid, wtype);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..0184a7eef61 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7bd3fed1f68..b776bc02237 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..a2b52d97b55 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..cc872414af5 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..23f3511f9a4 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync errors to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and seq_sync_error_count > 0 and sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that sequencesync doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,14 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0
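
With the seq_sync_error_count column added above, sequence synchronization
failures can be monitored separately from table synchronization and apply
failures; a rough example, with 'regress_sub' standing in for a real
subscription name:

    SELECT subname, apply_error_count, seq_sync_error_count, sync_error_count
      FROM pg_stat_subscription_stats
      WHERE subname = 'regress_sub';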

v20251027-0001-Add-worker-type-argument-to-logicalrep_wor.patchapplication/octet-stream; name=v20251027-0001-Add-worker-type-argument-to-logicalrep_wor.patchDownload
From 472391af89e4645d3385fa6fe2cb680a219649b7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Fri, 24 Oct 2025 15:03:50 +0530
Subject: [PATCH v20251027 1/4] Add worker type argument to
 logicalrep_worker_stop, logicalrep_worker_wakeup and logicalrep_worker_find

Extend the logicalrep_worker_stop, logicalrep_worker_wakeup and
logicalrep_worker_find functions to accept a worker type argument.
This change makes it possible to distinguish between different logical
replication worker types, such as apply workers and table sync workers.
The enhancement does not alter existing behavior but prepares the code
for future patches that will introduce sequence synchronization workers.
---
 src/backend/commands/subscriptioncmds.c     |  4 +--
 src/backend/replication/logical/launcher.c  | 39 ++++++++++++++-------
 src/backend/replication/logical/syncutils.c |  3 +-
 src/backend/replication/logical/tablesync.c | 11 +++---
 src/backend/replication/logical/worker.c    |  8 +++--
 src/include/replication/worker_internal.h   |  9 +++--
 6 files changed, 48 insertions(+), 26 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index a0974d71de1..1f45444b499 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1082,7 +1082,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
 
 				sub_remove_rels = lappend(sub_remove_rels, remove_rel);
 
-				logicalrep_worker_stop(sub->oid, relid);
+				logicalrep_worker_stop(WORKERTYPE_TABLESYNC, sub->oid, relid);
 
 				/*
 				 * For READY state, we would have already dropped the
@@ -2134,7 +2134,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	{
 		LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);
 
-		logicalrep_worker_stop(w->subid, w->relid);
+		logicalrep_worker_stop(w->type, w->subid, w->relid);
 	}
 	list_free(subworkers);
 
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 218cefe86e2..69f7e8f8788 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -245,20 +245,25 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
 }
 
 /*
- * Walks the workers array and searches for one that matches given
- * subscription id and relid.
+ * Walks the workers array and searches for one that matches given worker type,
+ * subscription id, and relation id.
  *
- * We are only interested in the leader apply worker or table sync worker.
+ * For apply workers, the relid should be set to InvalidOid, as they manage
+ * changes across all tables. For table sync workers, the relid should be set
+ * to the OID of the relation being synchronized.
  */
 LogicalRepWorker *
-logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
+logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
+					   bool only_running)
 {
 	int			i;
 	LogicalRepWorker *res = NULL;
 
+	/* relid must be valid only for table sync workers */
+	Assert((wtype == WORKERTYPE_TABLESYNC) == OidIsValid(relid));
 	Assert(LWLockHeldByMe(LogicalRepWorkerLock));
 
-	/* Search for attached worker for a given subscription id. */
+	/* Search for a worker matching the specified criteria. */
 	for (i = 0; i < max_logical_replication_workers; i++)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
@@ -268,7 +273,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool only_running)
 			continue;
 
 		if (w->in_use && w->subid == subid && w->relid == relid &&
-			(!only_running || w->proc))
+			w->type == wtype && (!only_running || w->proc))
 		{
 			res = w;
 			break;
@@ -627,16 +632,20 @@ logicalrep_worker_stop_internal(LogicalRepWorker *worker, int signo)
 }
 
 /*
- * Stop the logical replication worker for subid/relid, if any.
+ * Stop the logical replication worker that matches the specified worker type,
+ * subscription id, and relation id.
  */
 void
-logicalrep_worker_stop(Oid subid, Oid relid)
+logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid, Oid relid)
 {
 	LogicalRepWorker *worker;
 
+	/* relid must be valid only for table sync workers */
+	Assert((wtype == WORKERTYPE_TABLESYNC) == OidIsValid(relid));
+
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, false);
+	worker = logicalrep_worker_find(wtype, subid, relid, false);
 
 	if (worker)
 	{
@@ -694,16 +703,19 @@ logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo)
 }
 
 /*
- * Wake up (using latch) any logical replication worker for specified sub/rel.
+ * Wake up (using latch) any logical replication worker that matches the
+ * specified worker type, subscription id, and relation id.
  */
 void
-logicalrep_worker_wakeup(Oid subid, Oid relid)
+logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid, Oid relid)
 {
 	LogicalRepWorker *worker;
 
+	Assert(wtype == WORKERTYPE_APPLY && !OidIsValid(relid));
+
 	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-	worker = logicalrep_worker_find(subid, relid, true);
+	worker = logicalrep_worker_find(wtype, subid, relid, true);
 
 	if (worker)
 		logicalrep_worker_wakeup_ptr(worker);
@@ -1260,7 +1272,8 @@ ApplyLauncherMain(Datum main_arg)
 				continue;
 
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-			w = logicalrep_worker_find(sub->oid, InvalidOid, false);
+			w = logicalrep_worker_find(WORKERTYPE_APPLY, sub->oid, InvalidOid,
+									   false);
 
 			if (w != NULL)
 			{
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index e452a1e78d4..ae8c9385916 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -69,7 +69,8 @@ FinishSyncWorker(void)
 	CommitTransactionCommand();
 
 	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+							 InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 40e1ed3c20e..58c98488d7b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -160,7 +160,8 @@ wait_for_table_state_change(Oid relid, char expected_state)
 
 		/* Check if the sync worker is still running and bail if not. */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-		worker = logicalrep_worker_find(MyLogicalRepWorker->subid, relid,
+		worker = logicalrep_worker_find(WORKERTYPE_TABLESYNC,
+										MyLogicalRepWorker->subid, relid,
 										false);
 		LWLockRelease(LogicalRepWorkerLock);
 		if (!worker)
@@ -207,8 +208,9 @@ wait_for_worker_state_change(char expected_state)
 		 * waiting.
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-		worker = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+		worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+										MyLogicalRepWorker->subid, InvalidOid,
+										false);
 		if (worker && worker->proc)
 			logicalrep_worker_wakeup_ptr(worker);
 		LWLockRelease(LogicalRepWorkerLock);
@@ -476,7 +478,8 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 			 */
 			LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
-			syncworker = logicalrep_worker_find(MyLogicalRepWorker->subid,
+			syncworker = logicalrep_worker_find(WORKERTYPE_TABLESYNC,
+												MyLogicalRepWorker->subid,
 												rstate->relid, false);
 
 			if (syncworker)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5df5a4612b6..7edd1c9cf06 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -1817,7 +1817,8 @@ apply_handle_stream_start(StringInfo s)
 				 * Signal the leader apply worker, as it may be waiting for
 				 * us.
 				 */
-				logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+				logicalrep_worker_wakeup(WORKERTYPE_APPLY,
+										 MyLogicalRepWorker->subid, InvalidOid);
 			}
 
 			parallel_stream_nchanges = 0;
@@ -3284,8 +3285,9 @@ FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
 		 * maybe_advance_nonremovable_xid() for details).
 		 */
 		LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
-		leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
-										InvalidOid, false);
+		leader = logicalrep_worker_find(WORKERTYPE_APPLY,
+										MyLogicalRepWorker->subid, InvalidOid,
+										false);
 		if (!leader)
 		{
 			ereport(ERROR,
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index ae352f6e691..e23fa9a4514 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -254,7 +254,8 @@ extern PGDLLIMPORT bool InitializingApplyWorker;
 extern PGDLLIMPORT List *table_states_not_ready;
 
 extern void logicalrep_worker_attach(int slot);
-extern LogicalRepWorker *logicalrep_worker_find(Oid subid, Oid relid,
+extern LogicalRepWorker *logicalrep_worker_find(LogicalRepWorkerType wtype,
+												Oid subid, Oid relid,
 												bool only_running);
 extern List *logicalrep_workers_find(Oid subid, bool only_running,
 									 bool acquire_lock);
@@ -263,9 +264,11 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void logicalrep_worker_stop(Oid subid, Oid relid);
+extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
+								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
-extern void logicalrep_worker_wakeup(Oid subid, Oid relid);
+extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
+									 Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
-- 
2.43.0

v20251027-0002-New-worker-for-sequence-synchronization-du.patchapplication/octet-stream; name=v20251027-0002-New-worker-for-sequence-synchronization-du.patchDownload
From 5992e715fb368db7333531785268deb744a063c8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 15:31:13 +0530
Subject: [PATCH v20251027 2/4] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences in INIT state (via pg_get_sequence_data() on the publisher).
    b) Raises an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done, committing synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (PG19 command syntax is unchanged)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (PG19 command syntax is unchanged)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - The patch introduces this new command to refresh all sequences
      (a usage sketch follows below).
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      this does not add newly published sequences or remove dropped ones.
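
A minimal usage sketch, assuming the command syntax above ('regress_sub' is
just an example subscription name):

    -- re-synchronize all sequences of an existing subscription
    ALTER SUBSCRIPTION regress_sub REFRESH SEQUENCES;

    -- sequences are reset to INIT ('i') and reach READY ('r') once synchronized
    SELECT srrelid::regclass, srsubstate
      FROM pg_subscription_rel
      WHERE srrelid IN (SELECT oid FROM pg_class WHERE relkind = 'S');

    -- the sequencesync worker is visible while the refresh is in progress
    SELECT subid, worker_type FROM pg_stat_subscription;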

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/commands/sequence.c               |  21 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  53 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 795 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 114 ++-
 src/backend/replication/logical/tablesync.c   |  65 +-
 src/backend/replication/logical/worker.c      |  72 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   2 +-
 src/include/catalog/pg_subscription_rel.h     |  16 +
 src/include/commands/sequence.h               |   3 +
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  23 +-
 src/test/subscription/t/036_sequences.pl      | 192 ++++-
 src/tools/pgindent/typedefs.list              |   3 +
 18 files changed, 1269 insertions(+), 104 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index cf46a543364..ff29a3dc85b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -955,8 +954,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1057,7 +1056,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1065,14 +1064,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1081,7 +1080,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1799,7 +1798,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1843,6 +1843,11 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * See the comment in copy_sequence() above
+		 * UpdateSubscriptionRelState() for details on recording the LSN.
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 69f7e8f8788..36f47917340 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,9 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given worker type,
  * subscription id, and relation id.
  *
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables. For table sync workers, the relid should be set
- * to the OID of the relation being synchronized.
+ * For apply workers and sequence sync workers, the relid should be set to
+ * InvalidOid, as they are not associated with a single relation. For table
+ * sync workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
@@ -334,6 +335,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -422,7 +424,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -511,8 +514,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -847,6 +858,31 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time that is tracked in the subscription's
+ * apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/*
+	 * Reset last_seqsync_start_time in the apply worker rather than in this
+	 * sequencesync worker, because the sequencesync worker has finished and
+	 * is about to exit.
+	 */
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -895,7 +931,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1609,7 +1645,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1649,6 +1685,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..4a26277e242
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,795 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. The launcher process, in
+ *    contrast, operates without direct database access, so it would need a
+ *    framework to establish connections with the databases to retrieve the
+ *    sequences for synchronization.
+ * b) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * c) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/rls.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Is there a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(WORKERTYPE_SEQUENCESYNC,
+												 MyLogicalRepWorker->subid,
+												 InvalidOid, true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies found while synchronizing sequences: sequences with
+ * insufficient privileges, sequences whose parameters do not match between
+ * the publisher and subscriber, and sequences missing on the publisher.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+					   StringInfo missing_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s)",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
+		}
+	}
+
+	if (missing_seqs->len)
+	{
+		if (insuffperm_seqs->len || mismatched_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, " For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, "For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s.", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * get_remote_sequence_info
+ *
+ * Extract remote sequence information from a tuple slot received from the
+ * publisher.
+ */
+static void
+get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
+						 int64 *last_value, bool *is_called,
+						 XLogRecPtr *page_lsn, Oid *remote_typid,
+						 int64 *remote_start, int64 *remote_increment,
+						 int64 *remote_min, int64 *remote_max,
+						 bool *remote_cycle)
+{
+	bool		isnull;
+	int			col = 0;
+
+	key->nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	key->seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+}
+
+/*
+ * Compare sequence parameters from publisher with local sequence.
+ */
+static CopySeqResult
+validate_sequence(Relation sequence_rel, LogicalRepSequenceInfo *seqinfo,
+				  Oid remote_typid, int64 remote_start,
+				  int64 remote_increment, int64 remote_min,
+				  int64 remote_max, bool remote_cycle)
+{
+	Form_pg_sequence local_seq;
+	HeapTuple	tup;
+	CopySeqResult	result = COPYSEQ_SUCCESS;
+
+	/* Sequence was concurrently dropped */
+	if (!sequence_rel)
+		return COPYSEQ_SKIPPED;
+
+	/* Sequence was concurrently dropped */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!HeapTupleIsValid(tup))
+		return COPYSEQ_SKIPPED;
+
+	/* Sequence was concurrently invalidated */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		return COPYSEQ_SKIPPED;
+	}
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as synchronized.
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
+			  bool is_called, XLogRecPtr page_lsn)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	/*
+	 * Make sure that the sequence is copied as table owner, unless the user
+	 * has opted out of that behaviour.
+	 */
+	if (!MySubscription->runasowner)
+		SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqinfo->localrelid, last_value, is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY. The LSN represents the WAL position of the remote
+	 * sequence at the time it was synchronized.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+							   SUBREL_STATE_READY, page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			int64		last_value;
+			bool		is_called;
+			XLogRecPtr	page_lsn;
+			Oid			remote_typid;
+			int64		remote_start;
+			int64		remote_increment;
+			int64		remote_min;
+			int64		remote_max;
+			bool		remote_cycle;
+			bool		found;
+			CopySeqResult result;
+			Relation	sequence_rel;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			get_remote_sequence_info(slot, &key, &last_value, &is_called,
+									 &page_lsn, &remote_typid, &remote_start,
+									 &remote_increment, &remote_min,
+									 &remote_max, &remote_cycle);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			/* Try to open sequence */
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			result = validate_sequence(sequence_rel, seqinfo, remote_typid,
+									   remote_start, remote_increment,
+									   remote_min, remote_max, remote_cycle);
+			if (result == COPYSEQ_SUCCESS)
+				result = copy_sequence(seqinfo, last_value, is_called, page_lsn);
+
+			switch (result)
+			{
+				case COPYSEQ_MISMATCH:
+					append_sequence_name(mismatched_seqs, key.nspname,
+										 key.seqname, &batch_mismatched_count);
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+					append_sequence_name(insuffperm_seqs, key.nspname,
+										 key.seqname, &batch_insuffperm_count);
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered or dropped concurrently",
+								   key.nspname, key.seqname));
+					batch_skipped_count++;
+					break;
+				default:
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name, key.nspname,
+											key.seqname));
+					batch_succeeded_count++;
+					break;
+			}
+
+			/* Remove processed sequence from the hash table. */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is advanced by the attempted batch size rather than
+		 * by the number of rows actually fetched, because some sequences may
+		 * be missing on the publisher. The hash_search() calls with
+		 * HASH_REMOVE keep the remaining-sequence count accurate.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname,
+							 NULL);
+	}
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs, missing_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ae8c9385916..fce0c1fbb39 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,12 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker()
 {
+	LogicalRepWorkerType wtype = MyLogicalRepWorker->type;
+
+	Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,15 +67,27 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (wtype == WORKERTYPE_TABLESYNC)
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
-							 InvalidOid);
+	/*
+	 * This is a clean exit of the sequencesync worker; reset the
+	 * last_seqsync_start_time.
+	 */
+	if (wtype == WORKERTYPE_SEQUENCESYNC)
+		logicalrep_reset_seqsync_start_time();
+	else
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+								 InvalidOid);
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -86,7 +103,48 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers < max_sync_workers_per_subscription)
+	{
+		TimestampTz now = GetCurrentTimestamp();
+
+		if (!(*last_start_time) ||
+			TimestampDifferenceExceeds(*last_start_time, now,
+									   wal_retrieve_retry_interval))
+		{
+			/*
+			 * Set the last_start_time even if we fail to start the worker, so
+			 * that we won't retry until wal_retrieve_retry_interval has
+			 * elapsed.
+			 */
+			*last_start_time = now;
+			(void) logicalrep_worker_launch((relid == InvalidOid) ? WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,
+											MyLogicalRepWorker->dbid,
+											MySubscription->oid,
+											MySubscription->name,
+											MyLogicalRepWorker->userid,
+											relid,
+											DSM_HANDLE_INVALID,
+											false);
+		}
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -108,6 +166,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -117,17 +181,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if subscription has 1 or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready is declared as static,
+	 * since the same value can be used until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -139,6 +210,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,8 +222,8 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
@@ -160,7 +232,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -185,5 +262,8 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 58c98488d7b..8d85ba4e419 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -381,7 +381,7 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -415,6 +415,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -428,11 +436,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -552,43 +555,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1596,7 +1575,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1619,10 +1598,10 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1634,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1652,7 +1631,7 @@ HasSubscriptionTablesCached(void)
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7edd1c9cf06..13e11f75752 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2465,7 +2485,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4137,7 +4160,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5580,7 +5606,7 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid, true);
 
 			PG_RE_THROW();
 		}
@@ -5700,8 +5726,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5812,6 +5838,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5831,14 +5861,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5909,6 +5941,10 @@ ApplyWorkerMain(Datum main_arg)
 void
 DisableSubscriptionAndExit(void)
 {
+	LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+		(am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+		WORKERTYPE_APPLY;
+
 	/*
 	 * Emit the error message, and recover from the error state to an idle
 	 * state
@@ -5921,9 +5957,15 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	if (wtype != WORKERTYPE_SEQUENCESYNC)
+	{
+		/*
+		* Report the worker failed during either table synchronization or
+		* apply.
+		*/
+		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+										!am_tablesync_worker());
+	}
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index eecb43ec6f0..7bd3fed1f68 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..b42b05e6342 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,22 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..bcea652ef61 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -45,6 +45,8 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
 /* XLOG stuff */
 #define XLOG_SEQ_LOG			0x00
 
+#define SEQ_LOG_CNT_INVALID		0
+
 typedef struct xl_seq_rec
 {
 	RelFileLocator locator;
@@ -60,6 +62,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e23fa9a4514..b6fc9e7a6de 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -264,6 +267,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
 								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -272,6 +277,7 @@ extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -282,11 +288,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -351,15 +358,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..3b583057eb8 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,15 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +55,181 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH SEQUENCES should cause sync of new sequences
+# of the publisher, and changes to existing sequences should also be synced.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should throw an error
+# for sequence definition not matching between the publisher and the subscriber.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 43fe3bcd593..4de15b93a1c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1629,6 +1630,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

#441Chao Li
li.evan.chao@gmail.com
In reply to: Chao Li (#438)
Re: Logical Replication of sequences

On Oct 27, 2025, at 17:11, Chao Li <li.evan.chao@gmail.com> wrote:

The changes in 0001 are straightforward, looks good. I haven’t reviewed 0004 yet.

Comments for 0004:

1 - config.sgml
```
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
```

* “a failing replication apply worker” sounds a bit redundant, maybe change to “a failed apply worker”
* “will be respawned” works, but in formal documentation, I think “is respawned” is better

2 - logic-replication.sgml
```
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
```

* “may be” better to be “can be”
* I think the first sentence can be slightly enhanced as "Unlike tables, the state of a sequence can be synchronized at any time.”
* “refer to” should be “see” in PG docs. You can see that the very next paragraph already uses “see”:
```
<command>TRUNCATE</command>. See <xref linkend="logical-replication-row-filter"/>).
```

3 - logic-replication.sgml
```
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
```

“At the subscriber side” would be better as “on the subscriber”. In fact, you already use “on the subscriber” in the following paragraphs.

4 - logic-replication.sgml
```
During sequence synchronization, the sequence definitions of the publisher
and the subscriber are compared. An ERROR is logged listing all differing
sequences before the process exits. The apply worker detects this failure
and repeatedly respawns the sequence synchronization worker to continue
the synchronization process until all differences are resolved. See also
```

* “An ERROR” => “An error”. If you search the current docs, occurrences of “error” are all in lower case.
* " the sequence synchronization worker to continue the synchronization process”, the second “synchronization” sounds redundant, maybe enhance to "the sequence synchronization worker to retry"

5 - logic-replication.sgml
```
During sequence synchronization, if a sequence is dropped on the
publisher, the sequence synchronization worker will identify this and
remove it from sequence synchronization on the subscriber.
```

“Will identify this” => “detects the change”, I think PG docs usually prefer more direct phrasing.

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#442Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#440)
Re: Logical Replication of sequences

Hi Vignesh,

WIP - Some comments for patch v20251027-0003

======
General.

1.
When referring to synchronization workers AFAIK the convention has always been:

- code/comments refer to "tablesync workers" and "sequencesync workers"
- but errors/docs mostly use the long form like "table synchronization
workers" and "sequence synchronization workers"

This patch still has places saying "sequence sync worker" instead of
"sequencesync worker" etc. IMO these should be changed, otherwise
there are too many variations of saying the same thing.

======
src/backend/commands/sequence.c

pg_get_sequence_data:

2.
/*
* See the comment in copy_sequence() above
* UpdateSubscriptionRelState() for details on recording the LSN.
*/

Consider rewording that more like:
For details about recording the LSN, see the
UpdateSubscriptionRelState() call in copy_sequence().

======
src/backend/replication/logical/launcher.c

logicalrep_worker_find:

3.
/sequence sync workers/sequencesync workers/
/table sync workers/tablesync workers/

~~~

logicalrep_reset_seqsync_start_time:

4.
+ /*
+ * Set the last_seqsync_start_time for the sequence worker in the apply
+ * worker instead of the sequence sync worker, as the sequence sync worker
+ * has finished and is about to exit.
+ */

Half of this comment was already described in the function comment.
Maybe better to remove this comment, and instead just add more to the
function comment.

SUGGESTION (function comment)
Reset the last_seqsync_start_time of the sequencesync worker in the
subscription's apply worker. Note -- this value is not stored in the
sequencesync worker, because that has finished already and is about to
exit.

======
src/backend/replication/logical/syncutils.c

FinishSyncWorker:

5.
+FinishSyncWorker()
 {
+ LogicalRepWorkerType wtype = MyLogicalRepWorker->type;
+
+ Assert(wtype == WORKERTYPE_TABLESYNC || wtype == WORKERTYPE_SEQUENCESYNC);
+

We already have some macros so I think it's better to make use of them

SUGGESTION

Assert(am_tablesync_worker() || am_sequencesync_worker())

~

Similarly, the subsequent code like:

+ if (wtype == WORKERTYPE_TABLESYNC)
and
+ if (wtype == WORKERTYPE_SEQUENCESYNC)

becomes
if (am_tablesync_worker()) ...
if (am_sequencesync_worker()) ...

~~~

6.
+ /*
+ * This is a clean exit of the sequencesync worker; reset the
+ * last_seqsync_start_time.
+ */
+ if (wtype == WORKERTYPE_SEQUENCESYNC)
+ logicalrep_reset_seqsync_start_time();
+ else
+ /* Find the leader apply worker and signal it. */
+ logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+ InvalidOid);

I think those comments belong within the if blocks, so add some {}

SUGGESTION

if (is_sequencesync)
{
/* comment ... */
logicalrep_reset_seqsync_start_time(...)
}
else
{
/* comment ... */
logicalrep_worker_wakeup(...)
}

~~~

launch_sync_worker:

7.
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequence sync worker, actual relid for table sync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */

/sequence sync worker/sequencesync worker/
/table sync worker/tablesync worker/

~~~

8.
+ (void) logicalrep_worker_launch((relid == InvalidOid) ?
WORKERTYPE_SEQUENCESYNC : WORKERTYPE_TABLESYNC,

Tidier to use macro:
OidIsValid(relid) ? WORKERTYPE_TABLESYNC : WORKERTYPE_SEQUENCESYNC

OTOH, I think the caller already knows the WORKERTYPE_xxx it is
launching, perhaps it is best to pass 'wtype' as another parameter to
launch_sync_worker()?
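
For illustration only (not part of the posted patch), that alternative
could look something like:

extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
                               Oid relid, TimestampTz *last_start_time);

and the tablesync call site would then pass the type explicitly:

launch_sync_worker(WORKERTYPE_TABLESYNC, nsyncworkers, rstate->relid,
                   &hentry->last_start_time);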

~~~

FetchRelationStates

9.
+ /*
+ * has_subtables and has_subsequences_non_ready is declared as static,
+ * since the same value can be used until the system table is invalidated.
+ */

typo: /is declared/are declared/

======
src/backend/replication/logical/worker.c

DisableSubscriptionAndExit:

10.
+ LogicalRepWorkerType wtype = am_tablesync_worker() ? WORKERTYPE_TABLESYNC :
+ (am_sequencesync_worker()) ? WORKERTYPE_SEQUENCESYNC :
+ WORKERTYPE_APPLY;
+

Why is this code needed at all?

I think we don't need wtype, because later code that says:
+ if (wtype != WORKERTYPE_SEQUENCESYNC)

Can instead just say:
+ if (am_apply_worker() || am_tablesync_worker())

======
src/include/catalog/pg_subscription_rel.h

11.
+typedef struct LogicalRepSeqHashKey
+{
+ const char *seqname;
+ const char *nspname;
+} LogicalRepSeqHashKey;
+
+typedef struct LogicalRepSequenceInfo
+{
+ char    *seqname;
+ char    *nspname;
+ Oid localrelid;
+ bool remote_seq_queried;
+ Oid seqowner;
+ bool entry_valid;
+} LogicalRepSequenceInfo;

No comments?

======
src/include/commands/sequence.h

12.
+#define SEQ_LOG_CNT_INVALID 0
+

Unused?

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#443Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#439)
Re: Logical Replication of sequences

On Mon, Oct 27, 2025 at 5:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Oct 27, 2025 at 12:19 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 27 Oct 2025 at 10:04, Dilip Kumar <dilipbalaut@gmail.com> wrote:

One question, I am not sure if this has been discussed before, So while getting sequence information from remote we are also getting the page_lsn of the sequence and we are storing that in pg_subscription_rel. Is it just for the user to see and compare whether the sequence is synced to the latest lsn or is it used for anything else as well? In our patch sert, I don't see much usability information about this field.

This is mainly intended for the following purposes: a) To determine
whether the sequence requires resynchronization by comparing it with
the latest LSN on the publisher. b) To maintain consistency with
table synchronization behavior. c) To inform users up to which LSN
the sequence has been synchronized.
Further details will be documented in an upcoming patch.

Can we use it to build an auto-sequence-sync feature? One can imagine
that at some threshold interval the apply_worker could check whether any
of the replicated sequences are out-of-sync and, if so, sync those. We
could do this before the apply_worker waits for activity, or on a clean
shutdown.

We can even consider letting the sequencesync worker do this by not
having it exit after syncing all the required sequences. Having said
that, even if this is feasible, we should consider it as a top-up patch
after the sequencesync worker patch is committed.
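
Just as a rough sketch of the shape this could take (last_seq_check and
sequence_sync_interval are hypothetical; the other names are from the
current patch set), the apply worker could gate such a check on an
interval before waiting for activity:

/* Both of these are hypothetical */
static TimestampTz last_seq_check = 0;
static int sequence_sync_interval = 300000;    /* ms */

static void
MaybeResyncSequences(void)
{
    TimestampTz now = GetCurrentTimestamp();

    if (last_seq_check &&
        !TimestampDifferenceExceeds(last_seq_check, now,
                                    sequence_sync_interval))
        return;

    last_seq_check = now;

    /*
     * Compare the page LSN recorded in pg_subscription_rel for each synced
     * sequence with the publisher's current value, and reset any stale
     * sequences to SUBREL_STATE_INIT so that
     * ProcessSyncingSequencesForApply() launches a sequencesync worker
     * for them.
     */
}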

--
With Regards,
Amit Kapila.

#444Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#432)
Re: Logical Replication of sequences

On Mon, Oct 27, 2025 at 8:23 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Friday, October 24, 2025 11:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 23 Oct 2025 at 16:47, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 23, 2025 at 11:45 AM vignesh C <vignesh21@gmail.com> wrote:

The attached patch has the changes for the same.

I have pushed 0001 and the following are comments on 0002.

The attached v20251024 version patch has the changes for the same.
The comments from [1] have also been addressed in this version.

Thanks for updating the patch.

I was reviewing 0003 and have some thoughts for simplifying the code related to
sequence state invalidations and hash tables:

1. I'm considering whether we could lock sequences at the start and maintain
these locks until the copy process finishes, allowing us to remove the
invalidation code.

I understand that the current process is:

1. start a transaction to fetch namespace/seqname for all the sequences in
pg_subscription_rel
2. start multiple transactions and handle a batch of sequences in each
transaction

So if a sequence is altered between steps 1 and 2, then we need to skip the
renamed or dropped sequences in step 2 and invalidate the hash entry, which
looks inelegant.

To improve this, my proposal is to postpone the namespace/seqname fetch logic
until the second step. Initially, we would fetch just the sequence OIDs.
Then, in step 2, we would fetch the namespace/seqname after locking the
sequence. This approach ensures that any concurrent RENAME operations between
steps are irrelevant, as we will use the latest sequence names to query the
publisher, preventing any RENAME during step 2.

I think this can lead to an undetected deadlock for operations across
nodes. Consider the following example, where on each node an
AlterSequence operation is performed by a concurrent backend in the
form below.

On Node-1:
----------------
Begin
step-1
sequence sync worker: copy_sequences, locked sequence (say seq-1) in
RowExclusive mode;

Begin;
step-2
Alter Sequence seq-1... --step-2 wait on step-1

step-3
Query on pg_get_sequence_data (from Node-2) will wait for Alter
Sequence. --step-3 wait on step-2

On Node-2:
----------------
Begin;
step-1
sequence sync worker: copy_sequences, locked sequence (say seq-1) in
RowExclusive mode;

Begin
step-2
Alter Sequence seq-1 ... -- step-2 wait on step-1

step-3
Query on pg_get_sequence_data (from Node-1) will wait for Alter
Sequence. --step-3 wait on step-2

If the above scenario is possible, then the two nodes will create a
deadlock that can't be detected.
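
To restate the per-node queueing in SQL terms (a sketch only, based on the
locking described above; seq1 stands for any published sequence):

-- Session A: sequencesync worker inside copy_sequences()
BEGIN;
-- holds RowExclusiveLock on seq1 until the batch commits

-- Session B: concurrent backend on the same node
ALTER SEQUENCE seq1 MAXVALUE 10000;          -- waits behind session A

-- Session C: backend serving the other node's sequencesync connection
SELECT * FROM pg_get_sequence_data('seq1');  -- waits behind session B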

--
With Regards,
Amit Kapila.

#445vignesh C
vignesh21@gmail.com
In reply to: Chao Li (#438)
4 attachment(s)
Re: Logical Replication of sequences

On Mon, 27 Oct 2025 at 14:42, Chao Li <li.evan.chao@gmail.com> wrote:

On Oct 24, 2025, at 23:22, vignesh C <vignesh21@gmail.com> wrote:

Regards,
Vignesh
<v20251024-0001-Rename-sync_error_count-to-tbl_sync_error_.patch><v20251024-0002-Add-worker-type-argument-to-logicalrep_wor.patch><v20251024-0003-New-worker-for-sequence-synchronization-du.patch><v20251024-0004-Documentation-for-sequence-synchronization.patch>

The changes in 0001 are straightforward, looks good. I haven’t reviewed 0004 yet. Got a few comments for 0002 and 0003.

5 - 0003
```
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+       LogicalRepWorker *worker;
+
+       LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+       /*
+        * Set the last_seqsync_start_time for the sequence worker in the apply
+        * worker instead of the sequence sync worker, as the sequence sync worker
+        * has finished and is about to exit.
+        */
+       worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+                                                                       WORKERTYPE_APPLY, true);
+       if (worker)
+               worker->last_seqsync_start_time = 0;
+
+       LWLockRelease(LogicalRepWorkerLock);
+}
```

Two comments for this new function:

* The function comment and the in-code comment are redundant. I suggest moving the in-code comment into the function comment.
* Why LW_SHARED is used? We are writing worker->last_seqsync_start_time, shouldn’t LW_EXCLUSIVE be used?

There will be only one sequencesync worker, and only that process
updates this field, so LW_SHARED is enough for finding the apply worker.

6 - 0003
```
+       /*
+        * Count running sync workers for this subscription, while we have the
+        * lock.
+        */
+       nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+       LWLockRelease(LogicalRepWorkerLock);
+
+       launch_sync_worker(nsyncworkers, InvalidOid,
+                                          &MyLogicalRepWorker->last_seqsync_start_time);
```

I think there could be a race condition here. Because the lock is acquired in LW_SHARED mode, multiple callers may read the same nsyncworkers, and the sync worker is then launched based on a potentially stale nsyncworkers, since another worker might be started between LWLockRelease() and launch_sync_worker().

But if that is not the case and only one caller ever calls ProcessSyncingSequencesForApply(), then why is the lock needed?

The sequencesync worker is started only by the apply worker, so another
worker cannot be started for this subscription between LWLockRelease()
and launch_sync_worker(): the apply worker is the one responsible for
launching it and is busy with the current work at that point. The same
logic is used for tablesync workers too.

7 - 0003
```
+       if (insuffperm_seqs->len)
+       {
+               appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s)",
+                                                insuffperm_seqs->data);
+               appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+       }
```

“Grant permissions” is unclear. Should it be “Grant UPDATE privilege”?

Modified

8 - 0003
```
+ appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");
```

“To have matching parameters as publishers” doesn't read well grammatically. Maybe revise it to “to match the publisher’s parameters”.

Modified

9 - 0003
```
+               /*
+                * current_indexes is not incremented sequentially because some
+                * sequences may be missing, and the number of fetched rows may not
+                * match the batch size. The `hash_search` with HASH_REMOVE takes care
+                * of the count.
+                */
```

Typo: current_indexes => current_index

Modified

10 - 0003
```
-       /* Find the leader apply worker and signal it. */
-       logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+       /*
+        * This is a clean exit of the sequencesync worker; reset the
+        * last_seqsync_start_time.
+        */
+       if (wtype == WORKERTYPE_SEQUENCESYNC)
+               logicalrep_reset_seqsync_start_time();
+       else
+               /* Find the leader apply worker and signal it. */
+               logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
```

The comment “this is a clean exit of the sequencesync worker” is specific to the “if” branch, so I suggest moving it into the “if”. Also, the “this is a clean exit of the sequencesync worker” part is not needed; keep it consistent with the comment in the “else”.

Modified

11 - 0003
```
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+       /* If there is a free sync worker slot, start a new sync worker */
+       if (nsyncworkers < max_sync_workers_per_subscription)
+       {
```

The entire function body is under an “if”, so we can do “if (!…) return” and save a level of indent.

Modified

Peter's comments from [1]/messages/by-id/CAHut+PtMc1fr6cQvUAnxRE+buim5m-d9M2dM0YAeEHNkS9KzBw@mail.gmail.com have also been addressed. The attached
v20251029 version patch has the changes for the same.

[1]: /messages/by-id/CAHut+PtMc1fr6cQvUAnxRE+buim5m-d9M2dM0YAeEHNkS9KzBw@mail.gmail.com

Regards,
Vignesh

Attachments:

v20251029-0001-New-worker-for-sequence-synchronization-du.patch (application/octet-stream)
From 2ac0808c8107a52cd4272fa12f84c447f91b62de Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 15:31:13 +0530
Subject: [PATCH v20251029 1/4] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of sequences that are in INIT state,
       using pg_get_sequence_data().
    b) Reports an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all sequences are synchronized, committing them in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - (A new command introduced in PG19 by a prior patch.)
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/commands/sequence.c               |  21 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  51 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 794 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 112 ++-
 src/backend/replication/logical/tablesync.c   |  65 +-
 src/backend/replication/logical/worker.c      |  68 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   2 +-
 src/include/catalog/pg_subscription_rel.h     |  19 +
 src/include/commands/sequence.h               |   1 +
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  23 +-
 src/test/subscription/t/036_sequences.pl      | 193 ++++-
 src/tools/pgindent/typedefs.list              |   3 +
 18 files changed, 1262 insertions(+), 104 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c23dee5231c..9d1dc87ceb1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,8 +953,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1056,7 +1055,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1064,14 +1063,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1080,7 +1079,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1798,7 +1797,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1842,6 +1842,11 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * For details about recording the LSN, see the
+		 * UpdateSubscriptionRelState() call in copy_sequence().
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 95b5cae9a55..e2024680cf9 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,9 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given worker type,
  * subscription id, and relation id.
  *
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables. For table sync workers, the relid should be set
- * to the OID of the relation being synchronized.
+ * For apply workers and sequencesync workers, the relid should be set to
+ * InvalidOid, as they are not tied to a single relation. For tablesync
+ * workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
@@ -334,6 +335,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -422,7 +424,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -511,8 +514,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -848,6 +859,29 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time tracked by the subscription's apply
+ * worker.
+ *
+ * Note that this value is not stored in the sequencesync worker itself,
+ * because that worker has finished already and is about to exit.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -896,7 +930,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1610,7 +1644,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1650,6 +1684,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..6b582cae8c0
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,794 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.
+ *
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel. It begins by retrieving the list of sequences flagged
+ * for synchronization. These sequences are then processed in batches, allowing
+ * multiple entries to be synchronized within a single transaction. The worker
+ * fetches the current sequence values and page LSNs from the remote publisher,
+ * updates the corresponding sequences on the local subscriber, and finally
+ * marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ * b) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * c) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "common/hashfn.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicallauncher.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/catcache.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/pg_lsn.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 11
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static HTAB *sequences_to_copy = NULL;
+
+/*
+ * Handle sequence synchronization cooperation from the apply worker.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(&has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(WORKERTYPE_SEQUENCESYNC,
+												 MyLogicalRepWorker->subid,
+												 InvalidOid, true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_error_sequences
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber:
+ * insufficient privileges on a sequence, sequences that exist on both sides
+ * but have mismatched parameters, and sequences missing on the publisher.
+ */
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+					   StringInfo missing_seqs)
+{
+	StringInfo	combined_error_detail = makeStringInfo();
+	StringInfo	combined_error_hint = makeStringInfo();
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(combined_error_detail, "Insufficient privileges on the sequence(s): (%s)",
+						 insuffperm_seqs->data);
+		appendStringInfoString(combined_error_hint, "Grant UPDATE privilege on the sequence(s).");
+	}
+
+	if (mismatched_seqs->len)
+	{
+		if (insuffperm_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to match the publisher's parameters.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Mismatched sequence(s) on subscriber: (%s)",
+							 mismatched_seqs->data);
+			appendStringInfoString(combined_error_hint, "For mismatched sequences, alter or re-create local sequences to match the publisher's parameters.");
+		}
+	}
+
+	if (missing_seqs->len)
+	{
+		if (insuffperm_seqs->len || mismatched_seqs->len)
+		{
+			appendStringInfo(combined_error_detail, "; missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, " For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+		else
+		{
+			appendStringInfo(combined_error_detail, "Missing sequence(s) on publisher: (%s)",
+							 missing_seqs->data);
+			appendStringInfoString(combined_error_hint, "For missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.");
+		}
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s.", combined_error_detail->data),
+			errhint("%s", combined_error_hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * get_remote_sequence_info
+ *
+ * Extract remote sequence information from a tuple slot received from the
+ * publisher.
+ */
+static void
+get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
+						 int64 *last_value, bool *is_called,
+						 XLogRecPtr *page_lsn, Oid *remote_typid,
+						 int64 *remote_start, int64 *remote_increment,
+						 int64 *remote_min, int64 *remote_max,
+						 bool *remote_cycle)
+{
+	bool		isnull;
+	int			col = 0;
+
+	key->nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	key->seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+}
+
+/*
+ * Compare sequence parameters from publisher with local sequence.
+ */
+static CopySeqResult
+validate_sequence(Relation sequence_rel, LogicalRepSequenceInfo *seqinfo,
+				  Oid remote_typid, int64 remote_start,
+				  int64 remote_increment, int64 remote_min,
+				  int64 remote_max, bool remote_cycle)
+{
+	Form_pg_sequence local_seq;
+	HeapTuple	tup;
+	CopySeqResult	result = COPYSEQ_SUCCESS;
+
+	/* Sequence was concurrently dropped */
+	if (!sequence_rel)
+		return COPYSEQ_SKIPPED;
+
+	/* Sequence was concurrently dropped */
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+	if (!HeapTupleIsValid(tup))
+		return COPYSEQ_SKIPPED;
+
+	/* Sequence was concurrently invalidated */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		return COPYSEQ_SKIPPED;
+	}
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as synchronized.
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
+			  bool is_called, XLogRecPtr page_lsn)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+
+	/*
+	 * Make sure that the sequence is copied as the sequence owner, unless
+	 * the user has opted out of that behaviour.
+	 */
+	if (!MySubscription->runasowner)
+		SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqinfo->localrelid, last_value, is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY. The LSN represents the WAL position of the remote
+	 * sequence at the time it was synchronized.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+							   SUBREL_STATE_READY, page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn, Oid subid)
+{
+	int			total_seqs = hash_get_num_entries(sequences_to_copy);
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, total_seqs));
+
+	while (current_index < total_seqs)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+		hash_seq_init(&status, sequences_to_copy);
+
+		/* Collect a batch of sequences */
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->remote_seq_queried)
+				continue;
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
+			entry->remote_seq_queried = true;
+
+			batch_size++;
+			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
+				(current_index + batch_size == total_seqs))
+				break;
+		}
+
+		hash_seq_term(&status);
+
+		appendStringInfo(cmd,
+						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not receive list of sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			LogicalRepSequenceInfo *seqinfo;
+			LogicalRepSeqHashKey key;
+			int64		last_value;
+			bool		is_called;
+			XLogRecPtr	page_lsn;
+			Oid			remote_typid;
+			int64		remote_start;
+			int64		remote_increment;
+			int64		remote_min;
+			int64		remote_max;
+			bool		remote_cycle;
+			bool		found;
+			CopySeqResult result;
+			Relation	sequence_rel;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			get_remote_sequence_info(slot, &key, &last_value, &is_called,
+									 &page_lsn, &remote_typid, &remote_start,
+									 &remote_increment, &remote_min,
+									 &remote_max, &remote_cycle);
+
+			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
+			Assert(seqinfo);
+
+			/* Try to open sequence */
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			result = validate_sequence(sequence_rel, seqinfo, remote_typid,
+									   remote_start, remote_increment,
+									   remote_min, remote_max, remote_cycle);
+			if (result == COPYSEQ_SUCCESS)
+				result = copy_sequence(seqinfo, last_value, is_called, page_lsn);
+
+			switch (result)
+			{
+				case COPYSEQ_MISMATCH:
+					append_sequence_name(mismatched_seqs, key.nspname,
+										 key.seqname, &batch_mismatched_count);
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+					append_sequence_name(insuffperm_seqs, key.nspname,
+										 key.seqname, &batch_insuffperm_count);
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered or dropped concurrently",
+								   key.nspname, key.seqname));
+					batch_skipped_count++;
+					break;
+				default:
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name, key.nspname,
+											key.seqname));
+					batch_succeeded_count++;
+					break;
+			}
+
+			/* Remove processed sequence from the hash table. */
+			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
+				elog(ERROR, "hash table corrupted");
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is not incremented sequentially because some
+		 * sequences may be missing, and the number of fetched rows may not
+		 * match the batch size. The `hash_search` with HASH_REMOVE takes care
+		 * of the count.
+		 */
+		current_index += batch_size;
+	}
+
+	/*
+	 * Any sequences remaining in the hash table were not found on the
+	 * publisher. This is because they were included in a query
+	 * (remote_seq_queried) but were not returned in the result set.
+	 */
+	hash_seq_init(&status, sequences_to_copy);
+	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+	{
+		Assert(entry->remote_seq_queried);
+		append_sequence_name(missing_seqs, entry->nspname, entry->seqname,
+							 NULL);
+	}
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
+		report_error_sequences(insuffperm_seqs, mismatched_seqs, missing_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	HASH_SEQ_STATUS status;
+	LogicalRepSequenceInfo *entry;
+
+	/* Quick exit if no sequence is listed yet */
+	if (hash_get_num_entries(sequences_to_copy) == 0)
+		return;
+
+	if (reloid != InvalidOid)
+	{
+		hash_seq_init(&status, sequences_to_copy);
+
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		{
+			if (entry->localrelid == reloid)
+			{
+				entry->entry_valid = false;
+				hash_seq_term(&status);
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		hash_seq_init(&status, sequences_to_copy);
+		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+			entry->entry_valid = false;
+	}
+}
+
+static uint32
+LogicalRepSeqHash(const void *key, Size keysize)
+{
+	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
+	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
+	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
+
+	return h1 ^ h2;
+}
+
+static int
+LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
+{
+	int			cmp;
+	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
+	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
+
+	/* Compare by namespace name first */
+	cmp = strcmp(k1->nspname, k2->nspname);
+	if (cmp != 0)
+		return cmp;
+
+	/* If namespace names are equal, compare by sequence name */
+	return strcmp(k1->seqname, k2->seqname);
+}
+
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+	HASHCTL		ctl;
+	bool		found;
+	LogicalRepSequenceInfo *seq_entry;
+
+	ctl.keysize = sizeof(LogicalRepSeqHashKey);
+	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
+	ctl.hcxt = CacheMemoryContext;
+	ctl.hash = LogicalRepSeqHash;
+	ctl.match = LogicalRepSeqMatchFunc;
+	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
+									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		Relation	sequence_rel;
+		LogicalRepSeqHashKey key;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/* Skip if sequence was dropped concurrently */
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+		if (!sequence_rel)
+			continue;
+
+		key.seqname = RelationGetRelationName(sequence_rel);
+		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+		/* Allocate the tracking info in a permanent memory context. */
+		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+		Assert(!found);
+
+		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+		seq_entry->seqname = pstrdup(key.seqname);
+		seq_entry->nspname = pstrdup(key.nspname);
+		seq_entry->localrelid = subrel->srrelid;
+		seq_entry->remote_seq_queried = false;
+		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+		seq_entry->entry_valid = true;
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, RowExclusiveLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	/* If there are any sequences that need to be copied */
+	if (hash_get_num_entries(sequences_to_copy))
+		copy_sequences(LogRepWorkerWalRcvConn, subid);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by a
+ * system resource error and are not repeatable. Unlike tablesync, there is
+ * no replication slot name to allocate here.
+ */
+static void
+start_sequence_sync(void)
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ae8c9385916..414e63acbb6 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker()
 {
+	Assert(am_tablesync_worker() || am_sequencesync_worker());
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,15 +65,28 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (am_tablesync_worker())
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+	else
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
-							 InvalidOid);
+	if (am_sequencesync_worker())
+	{
+		/* Find the leader apply worker and reset last_seqsync_start_time. */
+		logicalrep_reset_seqsync_start_time();
+	}
+	else
+	{
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+								 InvalidOid);
+	}
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -86,7 +102,47 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequencesync worker, actual relid for tablesync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+	TimestampTz now;
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers >= max_sync_workers_per_subscription)
+		return;
+
+	now = GetCurrentTimestamp();
+
+	if (!(*last_start_time) ||
+		TimestampDifferenceExceeds(*last_start_time, now,
+								   wal_retrieve_retry_interval))
+	{
+		/*
+		 * Set the last_start_time even if we fail to start the worker, so that
+		 * we won't retry until wal_retrieve_retry_interval has elapsed.
+		 */
+		*last_start_time = now;
+		(void) logicalrep_worker_launch((OidIsValid(relid)) ? WORKERTYPE_TABLESYNC : WORKERTYPE_SEQUENCESYNC,
+										MyLogicalRepWorker->dbid,
+										MySubscription->oid,
+										MySubscription->name,
+										MyLogicalRepWorker->userid,
+										relid, DSM_HANDLE_INVALID, false);
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -108,6 +164,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -117,17 +179,24 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * Returns true if the subscription has one or more tables, else false.
  */
 bool
-FetchRelationStates(bool *started_tx)
+FetchRelationStates(bool *has_pending_sequences, bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared as static,
+	 * since the same value can be used until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -139,6 +208,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,8 +220,8 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
@@ -160,7 +230,12 @@ FetchRelationStates(bool *started_tx)
 		{
 			rstate = palloc(sizeof(SubscriptionRelState));
 			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+
+			if (get_rel_relkind(rstate->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -185,5 +260,8 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
+	if (has_pending_sequences)
+		*has_pending_sequences = has_subsequences_non_ready;
+
 	return has_subtables;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 58c98488d7b..8d85ba4e419 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -381,7 +381,7 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -415,6 +415,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -428,11 +436,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -552,43 +555,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(nsyncworkers, rstate->relid,
+								   &hentry->last_start_time);
 			}
 		}
 	}
@@ -1596,7 +1575,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1619,10 +1598,10 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_tables = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1634,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1652,7 +1631,7 @@ HasSubscriptionTablesCached(void)
 	bool		has_subrels;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	has_subrels = FetchRelationStates(NULL, &started_tx);
 
 	if (started_tx)
 	{
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7edd1c9cf06..57be632ae98 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2465,7 +2485,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4137,7 +4160,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5580,7 +5606,7 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid, true);
 
 			PG_RE_THROW();
 		}
@@ -5700,8 +5726,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5812,6 +5838,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5831,14 +5861,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5921,9 +5953,15 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	if (am_leader_apply_worker() || am_tablesync_worker())
+	{
+		/*
+		 * Report the worker failed during either table synchronization or
+		 * apply.
+		 */
+		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+										 !am_tablesync_worker());
+	}
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index eecb43ec6f0..7bd3fed1f68 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..49d1dd724bd 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,25 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+typedef struct LogicalRepSeqHashKey
+{
+	const char *seqname;
+	const char *nspname;
+} LogicalRepSeqHashKey;
+
+/*
+ * Stores metadata about a sequence involved in logical replication.
+ */
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		remote_seq_queried;
+	Oid			seqowner;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..46b4d89dd6e 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e23fa9a4514..b6fc9e7a6de 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -264,6 +267,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
+extern void launch_sync_worker(int nsyncworkers, Oid relid,
+							   TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
 								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -272,6 +277,7 @@ extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -282,11 +288,12 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern bool FetchRelationStates(bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -351,15 +358,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..986e2f360c4 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,15 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE regress_s4
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +55,182 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION does not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize all existing
+# sequences, but should not synchronize any newly added sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION with (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '100|32|t', 'Check sequence value in the publisher');
+
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s4;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########
+
+# Create a new sequence 'regress_s5' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s5 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for differing sequence parameters is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s5"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s5;
+));
+
+# Confirm that the error for missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s5"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bb4e1b37005..0abce776a56 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1628,6 +1629,8 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSeqHashKey
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0
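
A small aside for anyone eyeballing the 036_sequences.pl expectations
above: the publisher-side state that the sequencesync worker copies can be
inspected directly with pg_get_sequence_data() (the pg_proc.dat hunk above
only rewords its description). A minimal psql sketch, assuming the three
output columns documented in the 0003 docs patch (last_value, is_called,
page_lsn) and reusing the regress_s1 name from the test:

    SELECT nextval('regress_s1');
    SELECT last_value, is_called, page_lsn
      FROM pg_get_sequence_data('regress_s1');

The (last_value, is_called) pair is what the subscriber presumably applies
through the new SetSequence() call, which would also explain why log_cnt is
expected to be 0 on the subscriber after a sync: the value is installed
setval()-style rather than consumed through nextval().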

v20251029-0003-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From 06e5a33bcbd06e5ca7035bfdeaeee3c8092ed2bf Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251029 3/4] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 230 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 282 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+        <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..86c778fc1f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the state of
+   sequences can be synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,200 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An error is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to retry until
+    all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2286,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, changes to the sequences themselves are not replicated.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2622,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2636,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..242105e2cba 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0
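
On the "Refreshing Stale Sequences" wording in 0003: the docs tell the
user to "compare the sequence values between the publisher and
subscriber", which in practice is a manual check. A rough sketch of what
that amounts to, reusing the s1/sub1 names from the example section of
the patch:

    -- on the publisher
    SELECT last_value, is_called FROM s1;

    -- on the subscriber
    SELECT last_value, is_called FROM s1;

    -- if the values have drifted, re-synchronize everything
    ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;

Since REFRESH SEQUENCES re-copies all currently subscribed sequences, the
comparison only needs to establish that at least one sequence has drifted.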

v20251029-0004-Add-seq_sync_error_count-to-subscription-s.patch (application/octet-stream)
From 3006891efc59a4429d9c7a401c6892ae31b8e9f8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 28 Oct 2025 16:35:50 +0530
Subject: [PATCH v20251029 4/4] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
---
 doc/src/sgml/monitoring.sgml                  |  9 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 18 ++---
 .../utils/activity/pgstat_subscription.c      | 27 +++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 ++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 80 ++++++++++++-------
 11 files changed, 123 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 242105e2cba..0b2402b6ea6 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2193,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>seq_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..2b1854fd940 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 282a30d0cda..b42e5b1b422 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -661,6 +661,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 8d85ba4e419..87d3e723579 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 57be632ae98..cc02f35136f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5606,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, true);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_APPLY);
 
 			PG_RE_THROW();
 		}
@@ -5953,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		* Report the worker failed during either table synchronization or
-		* apply.
-		*/
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										!am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..0184a7eef61 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7bd3fed1f68..b776bc02237 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..a2b52d97b55 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..cc872414af5 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..23f3511f9a4 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and seq_sync_error_count > 0 and sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,14 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0
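
To illustrate how the new counter in 0004 would be consumed, a monitoring
query along these lines separates sequence synchronization failures from
table synchronization and apply failures (the subscription name is just an
example, taken from the 036_sequences.pl test):

    SELECT subname,
           apply_error_count,
           seq_sync_error_count,
           sync_error_count
      FROM pg_stat_subscription_stats
     WHERE subname = 'regress_seq_sub';

Like the other counters, seq_sync_error_count keeps accumulating until
pg_stat_reset_subscription_stats() is called, which is what the
026_stats.pl changes above exercise.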

v20251029-0002-Replace-Hash-table-with-a-List-and-elimina.patch (application/octet-stream)
From 7d3c174bd5f74f0052b9d244bfa26d3b2401d0d1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Tue, 28 Oct 2025 07:25:43 +0530
Subject: [PATCH v20251029 2/4] Replace Hash table with a List and eliminate
 invalidations

Replace Hash table with a List and eliminate invalidations
---
 .../replication/logical/sequencesync.c        | 303 ++++++------------
 src/include/catalog/pg_subscription_rel.h     |   3 +-
 2 files changed, 96 insertions(+), 210 deletions(-)

diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 6b582cae8c0..282a30d0cda 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -64,7 +64,6 @@
 #include "catalog/pg_sequence.h"
 #include "catalog/pg_subscription_rel.h"
 #include "commands/sequence.h"
-#include "common/hashfn.h"
 #include "pgstat.h"
 #include "postmaster/interrupt.h"
 #include "replication/logicallauncher.h"
@@ -77,22 +76,20 @@
 #include "utils/guc.h"
 #include "utils/inval.h"
 #include "utils/lsyscache.h"
+#include "utils/memutils.h"
 #include "utils/pg_lsn.h"
 #include "utils/syscache.h"
 #include "utils/usercontext.h"
 
-#define REMOTE_SEQ_COL_COUNT 11
+#define REMOTE_SEQ_COL_COUNT 10
 
 typedef enum CopySeqResult
 {
 	COPYSEQ_SUCCESS,
 	COPYSEQ_MISMATCH,
-	COPYSEQ_INSUFFICIENT_PERM,
-	COPYSEQ_SKIPPED
+	COPYSEQ_INSUFFICIENT_PERM
 } CopySeqResult;
 
-static HTAB *sequences_to_copy = NULL;
-
 /*
  * Handle sequence synchronization cooperation from the apply worker.
  *
@@ -228,7 +225,7 @@ append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
  * publisher.
  */
 static void
-get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
+get_remote_sequence_info(TupleTableSlot *slot, int *seqidx,
 						 int64 *last_value, bool *is_called,
 						 XLogRecPtr *page_lsn, Oid *remote_typid,
 						 int64 *remote_start, int64 *remote_increment,
@@ -238,10 +235,7 @@ get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
 	bool		isnull;
 	int			col = 0;
 
-	key->nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
-	Assert(!isnull);
-
-	key->seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
+	*seqidx = DatumGetInt32(slot_getattr(slot, ++col, &isnull));
 	Assert(!isnull);
 
 	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
@@ -279,30 +273,19 @@ get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
  * Compare sequence parameters from publisher with local sequence.
  */
 static CopySeqResult
-validate_sequence(Relation sequence_rel, LogicalRepSequenceInfo *seqinfo,
-				  Oid remote_typid, int64 remote_start,
-				  int64 remote_increment, int64 remote_min,
+validate_sequence(LogicalRepSequenceInfo *seqinfo, Oid remote_typid,
+				  int64 remote_start, int64 remote_increment, int64 remote_min,
 				  int64 remote_max, bool remote_cycle)
 {
 	Form_pg_sequence local_seq;
 	HeapTuple	tup;
 	CopySeqResult	result = COPYSEQ_SUCCESS;
 
-	/* Sequence was concurrently dropped */
-	if (!sequence_rel)
-		return COPYSEQ_SKIPPED;
-
 	/* Sequence was concurrently dropped */
 	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
 	if (!HeapTupleIsValid(tup))
-		return COPYSEQ_SKIPPED;
-
-	/* Sequence was concurrently invalidated */
-	if (!seqinfo->entry_valid)
-	{
-		ReleaseSysCache(tup);
-		return COPYSEQ_SKIPPED;
-	}
+		elog(ERROR, "cache lookup failed for sequence %u",
+			 seqinfo->localrelid);
 
 	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
 	if (local_seq->seqtypid != remote_typid ||
@@ -327,6 +310,7 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 	UserContext ucxt;
 	AclResult	aclresult;
 	bool		run_as_owner = MySubscription->runasowner;
+	Oid			seqoid = seqinfo->localrelid;
 
 	/*
 	 * Make sure that the sequence is copied as table owner, unless the user
@@ -335,7 +319,8 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 	if (!MySubscription->runasowner)
 		SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
 
-	aclresult = pg_class_aclcheck(seqinfo->localrelid, GetUserId(), ACL_UPDATE);
+	aclresult = pg_class_aclcheck(seqoid, GetUserId(), ACL_UPDATE);
+
 	if (aclresult != ACLCHECK_OK)
 	{
 		if (!run_as_owner)
@@ -344,7 +329,7 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 		return COPYSEQ_INSUFFICIENT_PERM;
 	}
 
-	SetSequence(seqinfo->localrelid, last_value, is_called);
+	SetSequence(seqoid, last_value, is_called);
 
 	if (!run_as_owner)
 		RestoreUserContext(&ucxt);
@@ -354,7 +339,7 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 	 * sequence as READY. The LSN represents the WAL position of the remote
 	 * sequence at the time it was synchronized.
 	 */
-	UpdateSubscriptionRelState(MySubscription->oid, seqinfo->localrelid,
+	UpdateSubscriptionRelState(MySubscription->oid, seqoid,
 							   SUBREL_STATE_READY, page_lsn, false);
 
 	return COPYSEQ_SUCCESS;
@@ -364,65 +349,80 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
  * Copy existing data of sequences from the publisher.
  */
 static void
-copy_sequences(WalReceiverConn *conn, Oid subid)
+copy_sequences(WalReceiverConn *conn, List *seqinfos)
 {
-	int			total_seqs = hash_get_num_entries(sequences_to_copy);
 	int			current_index = 0;
 	StringInfo	mismatched_seqs = makeStringInfo();
 	StringInfo	missing_seqs = makeStringInfo();
 	StringInfo	insuffperm_seqs = makeStringInfo();
 	StringInfo	seqstr = makeStringInfo();
 	StringInfo	cmd = makeStringInfo();
-	HASH_SEQ_STATUS status;
-	LogicalRepSequenceInfo *entry;
 
 #define MAX_SEQUENCES_SYNC_PER_BATCH 100
 
 	ereport(LOG,
 			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
-				   MySubscription->name, total_seqs));
+				   MySubscription->name, list_length(seqinfos)));
 
-	while (current_index < total_seqs)
+	while (current_index < list_length(seqinfos))
 	{
-		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {TEXTOID, TEXTOID, INT8OID,
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID,
 		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
 		int			batch_size = 0;
 		int			batch_succeeded_count = 0;
 		int			batch_mismatched_count = 0;
-		int			batch_skipped_count = 0;
 		int			batch_insuffperm_count = 0;
 
 		WalRcvExecResult *res;
 		TupleTableSlot *slot;
+		ListCell	   *lc;
 
 		StartTransactionCommand();
-		hash_seq_init(&status, sequences_to_copy);
 
-		/* Collect a batch of sequences */
-		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
+		for_each_from(lc, seqinfos, current_index)
 		{
-			if (entry->remote_seq_queried)
+			Relation sequence_rel;
+			MemoryContext oldctx;
+			LogicalRepSequenceInfo *seqinfo = (LogicalRepSequenceInfo *) lfirst(lc);
+
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			/* Skip if sequence was dropped concurrently */
+			if (!sequence_rel)
+			{
+				seqinfos = foreach_delete_current(seqinfos, lc);
 				continue;
+			}
+
+			/* Save sequence info */
+			oldctx = MemoryContextSwitchTo(TopMemoryContext);
+			seqinfo->seqname = pstrdup(RelationGetRelationName(sequence_rel));
+			seqinfo->nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+			seqinfo->seqowner = sequence_rel->rd_rel->relowner;
+			MemoryContextSwitchTo(oldctx);
+
+			/*
+			 * Hold the lock until transaction end to prevent concurrent
+			 * sequence alter operation.
+			 */
+			table_close(sequence_rel, NoLock);
 
 			if (seqstr->len > 0)
 				appendStringInfoString(seqstr, ", ");
 
-			appendStringInfo(seqstr, "(\'%s\', \'%s\')", entry->nspname, entry->seqname);
-			entry->remote_seq_queried = true;
+			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
+							 seqinfo->nspname, seqinfo->seqname,
+							 foreach_current_index(lc));
 
-			batch_size++;
-			if (batch_size == MAX_SEQUENCES_SYNC_PER_BATCH ||
-				(current_index + batch_size == total_seqs))
+			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
 				break;
 		}
 
-		hash_seq_term(&status);
-
 		appendStringInfo(cmd,
-						 "SELECT s.schname, s.seqname, ps.*, seq.seqtypid,\n"
+						 "SELECT s.seqidx, ps.*, seq.seqtypid,\n"
 						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
 						 "       seq.seqmax, seq.seqcycle\n"
-						 "FROM ( VALUES %s ) AS s (schname, seqname)\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname, seqidx)\n"
 						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
 						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
 						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
@@ -439,20 +439,18 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
 		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
 		{
-			LogicalRepSequenceInfo *seqinfo;
-			LogicalRepSeqHashKey key;
 			int64		last_value;
 			bool		is_called;
 			XLogRecPtr	page_lsn;
+			int			seqidx;
 			Oid			remote_typid;
 			int64		remote_start;
 			int64		remote_increment;
 			int64		remote_min;
 			int64		remote_max;
 			bool		remote_cycle;
-			bool		found;
 			CopySeqResult result;
-			Relation	sequence_rel;
+			LogicalRepSequenceInfo *seqinfo;
 
 			CHECK_FOR_INTERRUPTS();
 
@@ -462,54 +460,42 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 				ProcessConfigFile(PGC_SIGHUP);
 			}
 
-			get_remote_sequence_info(slot, &key, &last_value, &is_called,
+			get_remote_sequence_info(slot, &seqidx, &last_value, &is_called,
 									 &page_lsn, &remote_typid, &remote_start,
 									 &remote_increment, &remote_min,
 									 &remote_max, &remote_cycle);
 
-			seqinfo = hash_search(sequences_to_copy, &key, HASH_FIND, &found);
-			Assert(seqinfo);
+			seqinfo = (LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+			seqinfo->found_on_pub = true;
 
-			/* Try to open sequence */
-			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
-
-			result = validate_sequence(sequence_rel, seqinfo, remote_typid,
+			result = validate_sequence(seqinfo, remote_typid,
 									   remote_start, remote_increment,
 									   remote_min, remote_max, remote_cycle);
+
 			if (result == COPYSEQ_SUCCESS)
-				result = copy_sequence(seqinfo, last_value, is_called, page_lsn);
+				result = copy_sequence(seqinfo, last_value, is_called,
+									   page_lsn);
 
 			switch (result)
 			{
+				case COPYSEQ_SUCCESS:
+					batch_succeeded_count++;
+
+					elog(DEBUG1, "logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+						 MySubscription->name, seqinfo->nspname, seqinfo->seqname);
+
+					break;
 				case COPYSEQ_MISMATCH:
-					append_sequence_name(mismatched_seqs, key.nspname,
-										 key.seqname, &batch_mismatched_count);
+					append_sequence_name(mismatched_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_mismatched_count);
 					break;
 				case COPYSEQ_INSUFFICIENT_PERM:
-					append_sequence_name(insuffperm_seqs, key.nspname,
-										 key.seqname, &batch_insuffperm_count);
-					break;
-				case COPYSEQ_SKIPPED:
-					ereport(LOG,
-							errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered or dropped concurrently",
-								   key.nspname, key.seqname));
-					batch_skipped_count++;
-					break;
-				default:
-					ereport(DEBUG1,
-							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
-											MySubscription->name, key.nspname,
-											key.seqname));
-					batch_succeeded_count++;
+					append_sequence_name(insuffperm_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_insuffperm_count);
 					break;
 			}
-
-			/* Remove processed sequence from the hash table. */
-			if (!hash_search(sequences_to_copy, &key, HASH_REMOVE, NULL))
-				elog(ERROR, "hash table corrupted");
-
-			if (sequence_rel)
-				table_close(sequence_rel, NoLock);
 		}
 
 		ExecDropSingleTupleTableSlot(slot);
@@ -518,10 +504,10 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 		resetStringInfo(cmd);
 
 		ereport(LOG,
-				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d mismatched, %d insufficient permission, %d missing from publisher",
 					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
-					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
-					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+					   batch_succeeded_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_mismatched_count + batch_insuffperm_count)));
 
 		/* Commit this batch, and prepare for next batch */
 		CommitTransactionCommand();
@@ -529,24 +515,15 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 		/*
 		 * current_index is not incremented sequentially because some
 		 * sequences may be missing, and the number of fetched rows may not
-		 * match the batch size. The `hash_search` with HASH_REMOVE takes care
-		 * of the count.
+		 * match the batch size.
 		 */
 		current_index += batch_size;
 	}
 
-	/*
-	 * Any sequences remaining in the hash table were not found on the
-	 * publisher. This is because they were included in a query
-	 * (remote_seq_queried) but were not returned in the result set.
-	 */
-	hash_seq_init(&status, sequences_to_copy);
-	while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
-	{
-		Assert(entry->remote_seq_queried);
-		append_sequence_name(missing_seqs, entry->nspname, entry->seqname,
-							 NULL);
-	}
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, seqinfos)
+		if (!seqinfo->found_on_pub)
+			append_sequence_name(missing_seqs, seqinfo->nspname,
+								 seqinfo->seqname, NULL);
 
 	/* Report permission issues, mismatches, or missing sequences */
 	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
@@ -557,68 +534,6 @@ copy_sequences(WalReceiverConn *conn, Oid subid)
 	destroyStringInfo(insuffperm_seqs);
 }
 
-/*
- * Relcache invalidation callback
- */
-static void
-sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
-{
-	HASH_SEQ_STATUS status;
-	LogicalRepSequenceInfo *entry;
-
-	/* Quick exit if no sequence is listed yet */
-	if (hash_get_num_entries(sequences_to_copy) == 0)
-		return;
-
-	if (reloid != InvalidOid)
-	{
-		hash_seq_init(&status, sequences_to_copy);
-
-		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
-		{
-			if (entry->localrelid == reloid)
-			{
-				entry->entry_valid = false;
-				hash_seq_term(&status);
-				break;
-			}
-		}
-	}
-	else
-	{
-		/* invalidate all entries */
-		hash_seq_init(&status, sequences_to_copy);
-		while ((entry = (LogicalRepSequenceInfo *) hash_seq_search(&status)) != NULL)
-			entry->entry_valid = false;
-	}
-}
-
-static uint32
-LogicalRepSeqHash(const void *key, Size keysize)
-{
-	const LogicalRepSeqHashKey *k = (const LogicalRepSeqHashKey *) key;
-	uint32		h1 = string_hash(k->nspname, strlen(k->nspname));
-	uint32		h2 = string_hash(k->seqname, strlen(k->seqname));
-
-	return h1 ^ h2;
-}
-
-static int
-LogicalRepSeqMatchFunc(const void *key1, const void *key2, Size keysize)
-{
-	int			cmp;
-	const LogicalRepSeqHashKey *k1 = (const LogicalRepSeqHashKey *) key1;
-	const LogicalRepSeqHashKey *k2 = (const LogicalRepSeqHashKey *) key2;
-
-	/* Compare by namespace name first */
-	cmp = strcmp(k1->nspname, k2->nspname);
-	if (cmp != 0)
-		return cmp;
-
-	/* If namespace names are equal, compare by sequence name */
-	return strcmp(k1->seqname, k2->seqname);
-}
-
 /*
  * Start syncing the sequences in the sequencesync worker.
  */
@@ -633,21 +548,7 @@ LogicalRepSyncSequences(void)
 	SysScanDesc scan;
 	Oid			subid = MyLogicalRepWorker->subid;
 	StringInfoData app_name;
-	HASHCTL		ctl;
-	bool		found;
-	LogicalRepSequenceInfo *seq_entry;
-
-	ctl.keysize = sizeof(LogicalRepSeqHashKey);
-	ctl.entrysize = sizeof(LogicalRepSequenceInfo);
-	ctl.hcxt = CacheMemoryContext;
-	ctl.hash = LogicalRepSeqHash;
-	ctl.match = LogicalRepSeqMatchFunc;
-	sequences_to_copy = hash_create("Logical replication sequences", 256, &ctl,
-									HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_CONTEXT);
-
-	/* Watch for invalidation events. */
-	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
-								  (Datum) 0);
+	List	   *seqinfos = NIL;
 
 	StartTransactionCommand();
 
@@ -665,12 +566,12 @@ LogicalRepSyncSequences(void)
 
 	scan = systable_beginscan(rel, InvalidOid, false,
 							  NULL, 2, skey);
+
 	while (HeapTupleIsValid(tup = systable_getnext(scan)))
 	{
 		Form_pg_subscription_rel subrel;
 		char		relkind;
-		Relation	sequence_rel;
-		LogicalRepSeqHashKey key;
+		LogicalRepSequenceInfo *seq;
 		MemoryContext oldctx;
 
 		CHECK_FOR_INTERRUPTS();
@@ -682,32 +583,17 @@ LogicalRepSyncSequences(void)
 		if (relkind != RELKIND_SEQUENCE)
 			continue;
 
-		/* Skip if sequence was dropped concurrently */
-		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
-		if (!sequence_rel)
-			continue;
-
-		key.seqname = RelationGetRelationName(sequence_rel);
-		key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
-
-		/* Allocate the tracking info in a permanent memory context. */
-		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-
-		seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
-		Assert(!found);
-
-		memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+		/*
+		 * Worker needs to process sequences across transaction boundary, so
+		 * allocate them under long-lived context.
+		 */
+		oldctx = MemoryContextSwitchTo(TopMemoryContext);
 
-		seq_entry->seqname = pstrdup(key.seqname);
-		seq_entry->nspname = pstrdup(key.nspname);
-		seq_entry->localrelid = subrel->srrelid;
-		seq_entry->remote_seq_queried = false;
-		seq_entry->seqowner = sequence_rel->rd_rel->relowner;
-		seq_entry->entry_valid = true;
+		seq = palloc0_object(LogicalRepSequenceInfo);
+		seq->localrelid = subrel->srrelid;
+		seqinfos = lappend(seqinfos, seq);
 
 		MemoryContextSwitchTo(oldctx);
-
-		table_close(sequence_rel, RowExclusiveLock);
 	}
 
 	/* Cleanup */
@@ -716,6 +602,9 @@ LogicalRepSyncSequences(void)
 
 	CommitTransactionCommand();
 
+	if (!seqinfos)
+		return;
+
 	/* Is the use of a password mandatory? */
 	must_use_password = MySubscription->passwordrequired &&
 		!MySubscription->ownersuperuser;
@@ -739,9 +628,7 @@ LogicalRepSyncSequences(void)
 
 	pfree(app_name.data);
 
-	/* If there are any sequences that need to be copied */
-	if (hash_get_num_entries(sequences_to_copy))
-		copy_sequences(LogRepWorkerWalRcvConn, subid);
+	copy_sequences(LogRepWorkerWalRcvConn, seqinfos);
 }
 
 /*
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 49d1dd724bd..b3f5e1415e2 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -96,9 +96,8 @@ typedef struct LogicalRepSequenceInfo
 	char	   *seqname;
 	char	   *nspname;
 	Oid			localrelid;
-	bool		remote_seq_queried;
 	Oid			seqowner;
-	bool		entry_valid;
+	bool		found_on_pub;
 } LogicalRepSequenceInfo;
 
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
-- 
2.43.0

#446vignesh C
vignesh21@gmail.com
In reply to: Chao Li (#441)
Re: Logical Replication of sequences

On Tue, 28 Oct 2025 at 07:17, Chao Li <li.evan.chao@gmail.com> wrote:

On Oct 27, 2025, at 17:11, Chao Li <li.evan.chao@gmail.com> wrote:

The changes in 0001 are straightforward, looks good. I haven’t reviewed 0004 yet.

Comments for 0004:

1 - config.sgml
```
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
```

* “a failing replication apply worker” sounds a bit redundant, maybe change to “a failed apply worker”
* “will be respawned” works, but in formal documentation, I think “is respawned” is better

I felt this was documented that way in HEAD too, so I prefer the one in HEAD.

2 - logic-replication.sgml
```
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the current state of
+   sequences may be synchronized at any time. For more information, refer to
+   <xref linkend="logical-replication-sequences"/>.
```

* “may be” better to be “can be”
* I think the first sentence can be slightly enhanced as "Unlike tables, the state of a sequence can be synchronized at any time.”
* “refer to” should be “see” in PG docs. You can see right the next paragraph just uses “see”:
```
<command>TRUNCATE</command>. See <xref linkend="logical-replication-row-filter"/>).
```

Modified

3 - logic-replication.sgml
```
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   at the subscriber side:
```

“At the subscriber side” is better to be “on the subscriber”. Actually, you also use “on the subscriber” in the following paragraphs.

Modified

4 - logic-replication.sgml
```
During sequence synchronization, the sequence definitions of the publisher
and the subscriber are compared. An ERROR is logged listing all differing
sequences before the process exits. The apply worker detects this failure
and repeatedly respawns the sequence synchronization worker to continue
the synchronization process until all differences are resolved. See also
```

* “An ERROR” => “An error”. If you search for the current doc, “error” are all in lower case.
* " the sequence synchronization worker to continue the synchronization process”, the second “synchronization” sounds redundant, maybe enhance to "the sequence synchronization worker to retry"

Modified

5 - logic-replication.sgml
```
During sequence synchronization, if a sequence is dropped on the
publisher, the sequence synchronization worker will identify this and
remove it from sequence synchronization on the subscriber.
```

“Will identify this” => “detects the change”, I think PG docs usually prefer more direct phrasing.

This behavior has changed now, so I have removed it.

These changes are available in the v20251029 version posted at [1]/messages/by-id/CAHut+PtMc1fr6cQvUAnxRE+buim5m-d9M2dM0YAeEHNkS9KzBw@mail.gmail.com.
[1]: /messages/by-id/CAHut+PtMc1fr6cQvUAnxRE+buim5m-d9M2dM0YAeEHNkS9KzBw@mail.gmail.com

Regards,
Vignesh

#447Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#445)
Re: Logical Replication of sequences

Hi Vignesh,

Some more review comments for v20251029-0001.

======
.../replication/logical/sequencesync.c

1.
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.

I did not see anything in this header comment that says there is
currently no more than 1 sequencesync worker. The above part of the
comment doesn't make that clear.

~~~

2.
+ * Handle sequence synchronization cooperation from the apply worker.

Is it simpler just to say:
Apply worker determines if sequence synchronization is needed.

~~~

report_error_sequences:

3.
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+    StringInfo missing_seqs)

Function name seems strange. How about 'ereport_sequence_errors'?

~~~

4.
+ if (mismatched_seqs->len)
+ {
+ if (insuffperm_seqs->len)
+ {
+ appendStringInfo(combined_error_detail, "; mismatched sequence(s) on
subscriber: (%s)",
+ mismatched_seqs->data);
+ appendStringInfoString(combined_error_hint, " For mismatched
sequences, alter or re-create local sequences to match the publisher's
parameters.");
+ }
+ else
+ {
+ appendStringInfo(combined_error_detail, "Mismatched sequence(s) on
subscriber: (%s)",
+ mismatched_seqs->data);
+ appendStringInfoString(combined_error_hint, "For mismatched
sequences, alter or re-create local sequences to match the publisher's
parameters.");
+ }
+ }
+
+ if (missing_seqs->len)
+ {
+ if (insuffperm_seqs->len || mismatched_seqs->len)
+ {
+ appendStringInfo(combined_error_detail, "; missing sequence(s) on
publisher: (%s)",
+ missing_seqs->data);
+ appendStringInfoString(combined_error_hint, " For missing sequences,
remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION
to refresh the subscription.");
+ }
+ else
+ {
+ appendStringInfo(combined_error_detail, "Missing sequence(s) on
publisher: (%s)",
+ missing_seqs->data);
+ appendStringInfoString(combined_error_hint, "For missing sequences,
remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION
to refresh the subscription.");
+ }
+ }

This logic has a lot of duplication just to handle the separate
multiple details/hints. I think it can be simplified a lot.

SUGGESTION
if (mismatched_seqs->len)
{
if (combined_error_detail->len)
{
appendStringInfo(combined_error_detail, "; ");
appendStringInfoChar(combined_error_hint, ' ');
}
appendStringInfo(combined_error_detail, "Mismatched sequence(s) ...);
appendStringInfoString(combined_error_hint, "For mismatched sequences, ...);
}
if (missing_seqs->len)
{
if (combined_error_detail->len)
{
appendStringInfo(combined_error_detail, "; ");
appendStringInfoChar(combined_error_hint, ' ');
}
appendStringInfo(combined_error_detail, "Missing sequence(s) on ...);
appendStringInfoString(combined_error_hint, "For missing sequences,
remove ...);
}
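
To make the suggestion concrete, the shared pieces could also be pulled
into a tiny helper, something like the sketch below (the helper name and
the abbreviated message texts are mine, not from the patch; only
lib/stringinfo.h is needed):

```c
/*
 * Hypothetical helper: append one detail/hint pair, adding the
 * separators only when something has already been reported.
 */
static void
append_seq_error_chunk(StringInfo detail, StringInfo hint,
					   const char *detail_msg, const char *seqnames,
					   const char *hint_msg)
{
	if (detail->len)
	{
		appendStringInfoString(detail, "; ");
		appendStringInfoChar(hint, ' ');
	}
	appendStringInfo(detail, "%s: (%s)", detail_msg, seqnames);
	appendStringInfoString(hint, hint_msg);
}
```

Each category then collapses to a single call, e.g.
append_seq_error_chunk(combined_error_detail, combined_error_hint,
"Missing sequence(s) on publisher", missing_seqs->data,
"For missing sequences, remove them locally or refresh the subscription.");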

~~~

get_remote_sequence_info:

5.
+static void
+get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
+ int64 *last_value, bool *is_called,
+ XLogRecPtr *page_lsn, Oid *remote_typid,
+ int64 *remote_start, int64 *remote_increment,
+ int64 *remote_min, int64 *remote_max,
+ bool *remote_cycle)

I felt this code might be better if you would introduce a new
structure (or add to an existing one?) to hold all the members instead
of declaring a dozen variables and passing them as parameters. So, a
cleaner function signature here might be like:

static SequenceInfo get_remote_sequence_info(TupleTableSlot *slot).

This may also allow you to simplify other code that passes so many
members as parameters -- e.g. also the validation function.
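
To sketch what that could look like (the struct and field names are
invented here for illustration, the usual backend includes are assumed,
and the column order must of course match the query that copy_sequences()
builds):

```c
/* Hypothetical container for the per-sequence values fetched from the publisher. */
typedef struct RemoteSeqData
{
	int64		last_value;
	bool		is_called;
	XLogRecPtr	page_lsn;
	Oid			seqtypid;
	int64		seqstart;
	int64		seqincrement;
	int64		seqmin;
	int64		seqmax;
	bool		seqcycle;
} RemoteSeqData;

static void
get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
						 RemoteSeqData *remote)
{
	bool		isnull;
	int			col = 0;

	/* Hash key columns come first, as in the patch's query. */
	key->nspname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	key->seqname = TextDatumGetCString(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);

	/* Remaining columns all land in the one struct. */
	remote->last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->seqtypid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->seqstart = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->seqincrement = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->seqmin = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->seqmax = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
	remote->seqcycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
	Assert(!isnull);
}
```

validate_sequence() and copy_sequence() could then take a
const RemoteSeqData * instead of the long parameter lists.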

~~~
validate_sequence:

6.
+ /* Sequence was concurrently dropped */
+ if (!sequence_rel)
+ return COPYSEQ_SKIPPED;
+
+ /* Sequence was concurrently dropped */
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+ if (!HeapTupleIsValid(tup))
+ return COPYSEQ_SKIPPED;
+
+ /* Sequence was concurrently invalidated */
+ if (!seqinfo->entry_valid)
+ {
+ ReleaseSysCache(tup);
+ return COPYSEQ_SKIPPED;
+ }

6a.
All those comments might be better written as questions. e.g.
/* Sequence was concurrently dropped? */
/* Sequence was concurrently dropped? */
/* Sequence was concurrently invalidated? */

~

6b.
+ /* Sequence was concurrently dropped */
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+ if (!HeapTupleIsValid(tup))
+ return COPYSEQ_SKIPPED;

I think the 2nd comment belongs after the tup assignment.

~

6c.
+ local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+ if (local_seq->seqtypid != remote_typid ||
+ local_seq->seqstart != remote_start ||
+ local_seq->seqincrement != remote_increment ||
+ local_seq->seqmin != remote_min ||
+ local_seq->seqmax != remote_max ||
+ local_seq->seqcycle != remote_cycle)
+ result = COPYSEQ_MISMATCH;

This should have a comment like the others. e.g.
/* Sequence parameters for remote/local are the same? */

~~~

copy_sequence:

7.
+/*
+ * Apply remote sequence state to local sequence and mark it as synchronized.
+ */

Is it better to explicitly name the state here too? e.g.

/and mark it as synchronized/and mark it as synchronized (READY)/

~~~

8.
+ /*
+ * Make sure that the sequence is copied as table owner, unless the user
+ * has opted out of that behaviour.
+ */
+ if (!MySubscription->runasowner)
+ SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);

8a.
Is "table owner" the correct term here?

~

8b.
Why not use the 'run_as_owner' variable?

~~~

copy_sequences:

9.
+ ereport(ERROR,
+ errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not receive list of sequence information from the
publisher: %s",
+    res->err));

How about just:
"could not fetch sequence information from the publisher: %s",

~~~

10.
+ CopySeqResult result;

'result' seems a slightly meaningless variable name since this is not
even returned from the function.

~~~

sequencesync_list_invalidate_cb:

11.
+ if (reloid != InvalidOid)

Use macro !OidIsValid(reloid)

~

12.
+ hash_seq_init(&status, sequences_to_copy);
+

This init is common for the if/else so could be done outside.

~~~

LogicalRepSeqMatchFunc:

13.
+ /* Compare by namespace name first */
+ cmp = strcmp(k1->nspname, k2->nspname);
+ if (cmp != 0)
+ return cmp;
+
+ /* If namespace names are equal, compare by sequence name */
+ return strcmp(k1->seqname, k2->seqname);

A simpler way to write this comparator might look like:

/* Compare by namespace name first, then by sequence name */
cmp = strcmp(k1->nspname, k2->nspname);
if (cmp == 0)
cmp = strcmp(k1->seqname, k2->seqname);

return cmp;

~~~

LogicalRepSyncSequences:

14.
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)

Might need a different function comment. This one seems almost the
same as the function comment for copy_sequences().

~~~

15.
+ key.seqname = RelationGetRelationName(sequence_rel);
+ key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+ /* Allocate the tracking info in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+ seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+ Assert(!found);
+
+ memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+ seq_entry->localrelid = subrel->srrelid;
+ seq_entry->remote_seq_queried = false;
+ seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+ seq_entry->entry_valid = true;

Nit. It seems more natural to put the nspname before the seqname.

So,
key.nspname = ...
key.seqname = ...

and
seq_entry->nspname = pstrdup(key.nspname);
seq_entry->seqname = pstrdup(key.seqname);

~~~

LogicalRepSyncSequences:

16.
+ /* If there are any sequences that need to be copied */
+ if (hash_get_num_entries(sequences_to_copy))
+ copy_sequences(LogRepWorkerWalRcvConn, subid);

If there are no sequences to copy then AFAICT (from
SequenceSyncWorkerMain) this sequencesync worker is just going to
stop and exit, isn't it? So, why didn't we check this earlier, so that
we could have avoided making a publisher connection
(LogRepWorkerWalRcvConn) for no reason?
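
FWIW, the v20251029-0002 patch posted upthread already moves in this
direction by returning early once the collected list turns out to be
empty; the shape is roughly the sketch below (it uses the list-based
variable from that patch, where the actual test is written as
"if (!seqinfos)"):

```c
	/* Nothing to synchronize: skip connecting to the publisher at all. */
	if (seqinfos == NIL)
		return;
```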

======
src/backend/replication/logical/syncutils.c

FinishSyncWorker:

17.
+ if (am_tablesync_worker())
+ ereport(LOG,...));
+ else
+ ereport(LOG,...);
+ if (am_sequencesync_worker())
+ {
...
+ }
+ else
+ {
...
+ }

Using different ordering for these if/else conditions makes the logic
appear more complex. The second condition should be rearranged to be
in the same order as the first: if (am_tablesync_worker()) ... else ...;

~~~

launch_sync_worker:

18.
+ (void) logicalrep_worker_launch((OidIsValid(relid)) ?
WORKERTYPE_TABLESYNC : WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ relid, DSM_HANDLE_INVALID, false);

I noticed you chose not to pass 'wtype' as a parameter here, but
instead do the ternary to figure out the wtype. AFAICT the caller
already knows it, so why not just pass it in, instead of figuring it
out again?
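
A sketch of that shape (the handling of last_start_time and the exact
parameter order here are my assumptions, not taken from the patch):

```c
static void
launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers, Oid relid,
				   TimestampTz *last_start_time)
{
	/* No free sync worker slot; the caller will retry later. */
	if (nsyncworkers >= max_sync_workers_per_subscription)
		return;

	/* Assumed bookkeeping; the patch may track the start time differently. */
	*last_start_time = GetCurrentTimestamp();

	(void) logicalrep_worker_launch(wtype,
									MyLogicalRepWorker->dbid,
									MySubscription->oid,
									MySubscription->name,
									MyLogicalRepWorker->userid,
									relid, DSM_HANDLE_INVALID, false);
}
```

The two call sites would then pass WORKERTYPE_TABLESYNC or
WORKERTYPE_SEQUENCESYNC explicitly instead of re-deriving it from relid.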

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#448Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#445)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments patch V20251029-0001 (the test code only)

======
src/test/subscription/t/036_sequences.pl

1.
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ CREATE SEQUENCE regress_s4;
+ INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM
generate_series(1,100);
+));
+

AFAICT the sequence `regress_s3` (from the previous test part) was
already a "new sequence" that had not yet been REFRESHED to the
subscriber. So I think maybe there wasn't any need to create another
sequence `regress_s4` for this test part.

~~~

2.
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT last_value, log_cnt, is_called FROM regress_s4;
+));

Maybe that comment can give more details:
# Check - newly published sequence values are not updated when (copy_data = off)

~~~

3.
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########

OK, you have these mismatched-parameters and missing-sequences tests
for "REFRESH PUBLICATION", but what about doing the same tests for
"REFRESH SEQUENCES"? E.g. I am thinking you can ALTER/DROP some
publication that previously had synchronized OK, to verify what
happens during "REFRESH SEQUENCES".

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#449shveta malik
shveta.malik@gmail.com
In reply to: Peter Smith (#448)
Re: Logical Replication of sequences

Please find a few trivial comments on 0001:

1)
+ * The sequencesync worker is responsible for synchronizing sequences marked in
+ * pg_subscription_rel.

Shall we tweak it slightly to say:
'A single sequencesync worker is responsible for synchronizing all
sequences marked in pg_subscription_rel.'

I feel the fact that there is a single seq-sync worker is important
to mention in the file header.

2)
sequencesync.c compiles without this:

#include "replication/logicallauncher.h"

3)
Can we improve FetchRelationStates() slightly? Currently, for
sequences it has an output parameter, but for tables it has a return
value, which looks odd to me.

4)
AllTablesyncsReady() has changed the name of the variable from
has_subrels to has_tables which looks better. Do we need a similar
change in HasSubscriptionTablesCached as well?

5)
+is($result, '100|0|t', 'REFRESH PUBLICATION does not sync existing sequence');

+is($result, '1|0|f',
+ 'REFRESH SEQUENCES will not sync newly published sequence');

One has 'does not' while the other has 'will not'. Can we make both the same?

thanks
Shveta

#450Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: vignesh C (#445)
RE: Logical Replication of sequences

On Tuesday, October 28, 2025 8:30 PM vignesh C <vignesh21@gmail.com> wrote:

Peter's comments from [1] have also been addressed. The attached
v20251029 version patch has the changes for the same.

Thanks for updating the patch. I have a few comments on 0001:

1.

+	/*
+	 * Record the remote sequence’s LSN in pg_subscription_rel and mark the

There is a problem with the encoding of the single quote here.

2.

+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)

The macro is unused.

3.

-FinishSyncWorker(void)
+FinishSyncWorker()

Is this change necessary?

4.

+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);

I think we should take Exclusive lock here because we are modifying
the worker data.

Best Regards,
Hou zj

#451Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#445)
Re: Logical Replication of sequences

On Tue, Oct 28, 2025 at 6:00 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 27 Oct 2025 at 14:42, Chao Li <li.evan.chao@gmail.com> wrote:

On Oct 24, 2025, at 23:22, vignesh C <vignesh21@gmail.com> wrote:

Regards,
Vignesh

<v20251024-0001-Rename-sync_error_count-to-tbl_sync_error_.patch><v20251024-0002-Add-worker-type-argument-to-logicalrep_wor.patch><v20251024-0003-New-worker-for-sequence-synchronization-du.patch><v20251024-0004-Documentation-for-sequence-synchronization.patch>

The changes in 0001 are straightforward, looks good. I haven’t
reviewed 0004 yet. Got a few comments for 0002 and 0003.

5 - 0003
```
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+       LogicalRepWorker *worker;
+
+       LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+       /*
+        * Set the last_seqsync_start_time for the sequence worker in the apply
+        * worker instead of the sequence sync worker, as the sequence sync worker
+        * has finished and is about to exit.
+        */
+       worker = logicalrep_worker_find(MyLogicalRepWorker->subid, InvalidOid,
+                                       WORKERTYPE_APPLY, true);

+       if (worker)
+               worker->last_seqsync_start_time = 0;
+
+       LWLockRelease(LogicalRepWorkerLock);
+}
```

Two comments for this new function:

* The function comment and in-code comment are redundant. Suggesting
move the in-code comment to function comment.

* Why LW_SHARED is used? We are writing worker->last_seqsync_start_time,
shouldn’t LW_EXCLUSIVE be used?

There will be only one sequence sync worker and only this process is
going to update this, so LW_SHARED is enough to find the apply worker.

6 - 0003
```
+       /*
+        * Count running sync workers for this subscription, while we

have the

+        * lock.
+        */
+       nsyncworkers =

logicalrep_sync_worker_count(MyLogicalRepWorker->subid);

+       LWLockRelease(LogicalRepWorkerLock);
+
+       launch_sync_worker(nsyncworkers, InvalidOid,
+

&MyLogicalRepWorker->last_seqsync_start_time);

```

I think here could be a race condition. Because the lock is acquired
in LW_SHARED, meaning multiple caller may get the same nsyncworkers.
Then it launches sync worker based on nsyncworkers, which would use
inaccurate nsyncworkers, because between LWLockRelease() and
launch_sync_worker(), another worker might be started.

But if that is not the case, only one caller should call
ProcessSyncingSequencesForApply(), then why the lock is needed?

The sequence sync worker will be started only by the apply worker;
another worker cannot be started for this subscription between
LWLockRelease() and launch_sync_worker(), as this apply worker is
responsible for it and is busy with the current work. The same logic
is used for table sync workers too.

7 - 0003
```
+       if (insuffperm_seqs->len)
+       {
+               appendStringInfo(combined_error_detail, "Insufficient permission for sequence(s): (%s)",
+                                                insuffperm_seqs->data);
+               appendStringInfoString(combined_error_hint, "Grant permissions for the sequence(s).");
+       }
```

“Grant permissions” is unclear. Should it be “Grant UPDATE privilege”?

Modified

8 - 0003
```
+ appendStringInfoString(combined_error_hint, " For mismatched sequences, alter or re-create local sequences to have matching parameters as publishers.");

```

“To have matching parameters as publishers” grammatically doesn't
sound good. Maybe revise to “to match the publisher’s parameters”.

Modified

9 - 0003
```
+               /*
+                * current_indexes is not incremented sequentially

because some

+ * sequences may be missing, and the number of fetched

rows may not

+ * match the batch size. The `hash_search` with

HASH_REMOVE takes care

+                * of the count.
+                */
```

Typo: current_indexes => current_index

Modified

10 - 0003
```
-       /* Find the leader apply worker and signal it. */
-       logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
+       /*
+        * This is a clean exit of the sequencesync worker; reset the
+        * last_seqsync_start_time.
+        */
+       if (wtype == WORKERTYPE_SEQUENCESYNC)
+               logicalrep_reset_seqsync_start_time();
+       else
+               /* Find the leader apply worker and signal it. */
+               logicalrep_worker_wakeup(MyLogicalRepWorker->subid,
+                                        InvalidOid);
```

The comment “this is a clean exit of the sequencesync worker” is
specific to the “if”, so I suggest moving it into the “if”. Also, the
“this is a clean exit of the sequencesync worker” wording is not
needed; keep it consistent with the comment in the “else”.

Modified

11 - 0003
```
+void
+launch_sync_worker(int nsyncworkers, Oid relid, TimestampTz *last_start_time)
+{
+       /* If there is a free sync worker slot, start a new sync worker */
+       if (nsyncworkers < max_sync_workers_per_subscription)
+       {
```

The entire function is under an “if”, so we can do “if (!…) return”,
which saves a level of indent.

Modified

Peter's comments from [1] have also been addressed. The attached
v20251029 version patch has the changes for the same.

[1] -
/messages/by-id/CAHut+PtMc1fr6cQvUAnxRE+buim5m-d9M2dM0YAeEHNkS9KzBw@mail.gmail.com

Are you planning to merge 0001 and 0002? I don't think we want to
first commit the solution with a hash table and then commit the switch
to a list. For review as well, we had better merge these two, so that
we don't need to review the hash-table changes that we are not going
to commit, assuming we agree to go ahead with the list.

--
Regards,
Dilip Kumar
Google

#452vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#447)
3 attachment(s)
Re: Logical Replication of sequences

On Wed, 29 Oct 2025 at 08:17, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Some more review comments for v20251029-0001.

======
.../replication/logical/sequencesync.c

1.
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a
+ * sequencesync worker to handle synchronization.

I did not see anything in this header comment that says there is
currently no more than 1 sequencesync worker. The above part of the
comment doesn't make that clear.

Modified

~~~

2.
+ * Handle sequence synchronization cooperation from the apply worker.

Is it simpler just to say:
Apply worker determines if sequence synchronization is needed.

Modified

~~~

report_error_sequences:

3.
+static void
+report_error_sequences(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+    StringInfo missing_seqs)

Function name seems strange. How about 'ereport_sequence_errors'?

I preferred report_sequence_errors over ereport_sequence_errors.

~~~

4.
+ if (mismatched_seqs->len)
+ {
+ if (insuffperm_seqs->len)
+ {
+ appendStringInfo(combined_error_detail, "; mismatched sequence(s) on
subscriber: (%s)",
+ mismatched_seqs->data);
+ appendStringInfoString(combined_error_hint, " For mismatched
sequences, alter or re-create local sequences to match the publisher's
parameters.");
+ }
+ else
+ {
+ appendStringInfo(combined_error_detail, "Mismatched sequence(s) on
subscriber: (%s)",
+ mismatched_seqs->data);
+ appendStringInfoString(combined_error_hint, "For mismatched
sequences, alter or re-create local sequences to match the publisher's
parameters.");
+ }
+ }
+
+ if (missing_seqs->len)
+ {
+ if (insuffperm_seqs->len || mismatched_seqs->len)
+ {
+ appendStringInfo(combined_error_detail, "; missing sequence(s) on
publisher: (%s)",
+ missing_seqs->data);
+ appendStringInfoString(combined_error_hint, " For missing sequences,
remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION
to refresh the subscription.");
+ }
+ else
+ {
+ appendStringInfo(combined_error_detail, "Missing sequence(s) on
publisher: (%s)",
+ missing_seqs->data);
+ appendStringInfoString(combined_error_hint, "For missing sequences,
remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION
to refresh the subscription.");
+ }
+ }

This logic has a lot of duplication just to handle the separate
multiple details/hints. I think it can be simplified a lot.

SUGGESTION
if (mismatched_seqs->len)
{
if (combined_error_detail->len)
{
appendStringInfo(combined_error_detail, "; ");
appendStringInfoChar(combined_error_hint, ' ');
}
appendStringInfo(combined_error_detail, "Mismatched sequence(s) ...);
appendStringInfoString(combined_error_hint, "For mismatched sequences, ...);
}
if (missing_seqs->len)
{
if (combined_error_detail->len)
{
appendStringInfo(combined_error_detail, "; ");
appendStringInfoChar(combined_error_hint, ' ');
}
appendStringInfo(combined_error_detail, "Missing sequence(s) on ...);
appendStringInfoString(combined_error_hint, "For missing sequences,
remove ...);
}

Modified

~~~

get_remote_sequence_info:

5.
+static void
+get_remote_sequence_info(TupleTableSlot *slot, LogicalRepSeqHashKey *key,
+ int64 *last_value, bool *is_called,
+ XLogRecPtr *page_lsn, Oid *remote_typid,
+ int64 *remote_start, int64 *remote_increment,
+ int64 *remote_min, int64 *remote_max,
+ bool *remote_cycle)

I felt this code might be better if you would introduce a new
structure (or add to an existing one?) to hold all the members instead
of declaring a dozen variables and passing them as parameters. So, a
cleaner function signature here might be like:

static SequenceInfo get_remote_sequence_info(TupleTableSlot *slot).

This may also allow you to simplify other code that passes so many
members as parameters -- e.g. also the validation function.

I will take this up in the next version or a little later.

~~~
validate_sequence:

6.
+ /* Sequence was concurrently dropped */
+ if (!sequence_rel)
+ return COPYSEQ_SKIPPED;
+
+ /* Sequence was concurrently dropped */
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+ if (!HeapTupleIsValid(tup))
+ return COPYSEQ_SKIPPED;
+
+ /* Sequence was concurrently invalidated */
+ if (!seqinfo->entry_valid)
+ {
+ ReleaseSysCache(tup);
+ return COPYSEQ_SKIPPED;
+ }

6a.
All those comments might be better written as questions. e.g.
/* Sequence was concurrently dropped? */
/* Sequence was concurrently dropped? */
/* Sequence was concurrently invalidated? */

Modified

~

6b.
+ /* Sequence was concurrently dropped */
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+ if (!HeapTupleIsValid(tup))
+ return COPYSEQ_SKIPPED;

I think the 2nd comment belongs after the tup assignment.

Modified

~

6c.
+ local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+ if (local_seq->seqtypid != remote_typid ||
+ local_seq->seqstart != remote_start ||
+ local_seq->seqincrement != remote_increment ||
+ local_seq->seqmin != remote_min ||
+ local_seq->seqmax != remote_max ||
+ local_seq->seqcycle != remote_cycle)
+ result = COPYSEQ_MISMATCH;

This should have a comment like the others. e.g.
/* Sequence parameters for remote/local are the same? */

Modified

~~~

copy_sequence:

7.
+/*
+ * Apply remote sequence state to local sequence and mark it as synchronized.
+ */

Is it better to explicitly name the state here too? e.g.

/and mark it as synchronized/and mark it as synchronized (READY)/

Modified

~~~

8.
+ /*
+ * Make sure that the sequence is copied as table owner, unless the user
+ * has opted out of that behaviour.
+ */
+ if (!MySubscription->runasowner)
+ SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);

8a.
Is "table owner" the correct term here?

Changed it to sequence owner

~

8b.
Why not use the 'run_as_owner' variable?

modified

~~~

copy_sequences:

9.
+ ereport(ERROR,
+ errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not receive list of sequence information from the
publisher: %s",
+    res->err));

How about just:
"could not fetch sequence information from the publisher: %s",

modified

~~~

10.
+ CopySeqResult result;

'result' seems a slightly meaningless variable name since this is not
even returned from the function.

Changed it

~~~

sequencesync_list_invalidate_cb:

11.
+ if (reloid != InvalidOid)

Use macro !OidIsValid(reloid)

Modified

~

12.
+ hash_seq_init(&status, sequences_to_copy);
+

This init is common for the if/else so could be done outside.

This code is removed now

~~~

LogicalRepSeqMatchFunc:

13.
+ /* Compare by namespace name first */
+ cmp = strcmp(k1->nspname, k2->nspname);
+ if (cmp != 0)
+ return cmp;
+
+ /* If namespace names are equal, compare by sequence name */
+ return strcmp(k1->seqname, k2->seqname);

A simpler way to write this comparator might look like:

/* Compare by namespace name first, then by sequence name */
cmp = strcmp(k1->nspname, k2->nspname);
if (cmp == 0)
cmp = strcmp(k1->seqname, k2->seqname);

return cmp;

This code is removed now

~~~

LogicalRepSyncSequences:

14.
+/*
+ * Start syncing the sequences in the sequencesync worker.
+ */
+static void
+LogicalRepSyncSequences(void)

Might need a different function comment. This one seems almost the
same as the function comment for copy_sequences().

Modified

~~~

15.
+ key.seqname = RelationGetRelationName(sequence_rel);
+ key.nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+
+ /* Allocate the tracking info in a permanent memory context. */
+ oldctx = MemoryContextSwitchTo(CacheMemoryContext);
+
+ seq_entry = hash_search(sequences_to_copy, &key, HASH_ENTER, &found);
+ Assert(!found);
+
+ memset(seq_entry, 0, sizeof(LogicalRepSequenceInfo));
+
+ seq_entry->localrelid = subrel->srrelid;
+ seq_entry->remote_seq_queried = false;
+ seq_entry->seqowner = sequence_rel->rd_rel->relowner;
+ seq_entry->entry_valid = true;

Nit. It seems more natural to put the nspname before the seqname.

So,
key.nspname = ...
key.seqname = ...

and
seq_entry->nspname = pstrdup(key.nspname);
seq_entry->seqname = pstrdup(key.seqname);

This code has been removed. There was one more place where the
ordering was different; I modified it there.

~~~

LogicalRepSyncSequences:

16.
+ /* If there are any sequences that need to be copied */
+ if (hash_get_num_entries(sequences_to_copy))
+ copy_sequences(LogRepWorkerWalRcvConn, subid);

If there are no sequences to copy then AFAICT (from
SequenceSyncWorkerMain) this sequencesync worker is just going to
stop and exit, isn't it? So, why didn't we check this earlier, so that
we could have avoided making a publisher connection
(LogRepWorkerWalRcvConn) for no reason?

Modified

======

src/backend/replication/logical/syncutils.c

FinishSyncWorker:

17.
+ if (am_tablesync_worker())
+ ereport(LOG,...));
+ else
+ ereport(LOG,...);
+ if (am_sequencesync_worker())
+ {
...
+ }
+ else
+ {
...
+ }

Using different ordering for these if/else conditions makes the logic
appear more complex. The second condition should be rearranged to be
in the same order as the first: if (am_tablesync_worker()) ... else ...;

Modified

~~~

launch_sync_worker:

18.
+ (void) logicalrep_worker_launch((OidIsValid(relid)) ?
WORKERTYPE_TABLESYNC : WORKERTYPE_SEQUENCESYNC,
+ MyLogicalRepWorker->dbid,
+ MySubscription->oid,
+ MySubscription->name,
+ MyLogicalRepWorker->userid,
+ relid, DSM_HANDLE_INVALID, false);

I noticed you chose not to pass 'wtype' as a parameter here, but
instead do the ternary to figure out the wtype. AFAICT the caller
already knows it, so why not just pass it in, instead of figuring it
out again?

Modified

Shveta's comments from [1]/messages/by-id/CAJpy0uCkt4V95un1025xV+BoLOXg0DTk418Di_f6gerpuezBmA@mail.gmail.com have also been addressed in this version.
Dilip's merge suggestion from [2]/messages/by-id/CAFiTN-v1mm=wAMBVT82Ok9YrGG7o-wszxw4RHsmKk1oP+=rJnA@mail.gmail.com has also been addressed in this version.

The attached v20251029_2 version patch has the fixes for the same.

[1]: /messages/by-id/CAJpy0uCkt4V95un1025xV+BoLOXg0DTk418Di_f6gerpuezBmA@mail.gmail.com
[2]: /messages/by-id/CAFiTN-v1mm=wAMBVT82Ok9YrGG7o-wszxw4RHsmKk1oP+=rJnA@mail.gmail.com

Regards,
Vignesh

Attachments:

v20251029_2-0001-New-worker-for-sequence-synchronization-.patchtext/x-patch; charset=UTF-8; name=v20251029_2-0001-New-worker-for-sequence-synchronization-.patchDownload
From 9b3cf7ba3dd85994d384f8c10f365a959c1d27a7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 15:31:13 +0530
Subject: [PATCH v20251029_2 1/3] New worker for sequence synchronization
 during subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves, using pg_sequence_state(), the remote values of sequences in INIT state.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - (A new command introduced in PG19 by a prior patch.)
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/commands/sequence.c               |  21 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  51 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 726 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 132 +++-
 src/backend/replication/logical/tablesync.c   |  69 +-
 src/backend/replication/logical/worker.c      |  69 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   2 +-
 src/include/catalog/pg_subscription_rel.h     |  13 +
 src/include/commands/sequence.h               |   1 +
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  24 +-
 src/test/subscription/t/036_sequences.pl      | 178 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 18 files changed, 1192 insertions(+), 110 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c23dee5231c..9d1dc87ceb1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,8 +953,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1056,7 +1055,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1064,14 +1063,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1080,7 +1079,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1798,7 +1797,8 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
  * Return the sequence tuple along with its page LSN.
  *
  * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * without needing to individually query each sequence relation. This will also
+ * be used by logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1842,6 +1842,11 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
+
+		/*
+		 * For details about recording the LSN, see the
+		 * UpdateSubscriptionRelState() call in copy_sequence().
+		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
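
For reference, the extended pg_get_sequence_data() can be exercised directly
on the publisher; a minimal sketch, using a hypothetical sequence name:

    -- Returns the sequence's last value, is_called flag, and page LSN.
    SELECT * FROM pg_get_sequence_data('myseq'::regclass);
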
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 95b5cae9a55..e2024680cf9 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,9 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given worker type,
  * subscription id, and relation id.
  *
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables. For table sync workers, the relid should be set
- * to the OID of the relation being synchronized.
+ * For apply workers and sequencesync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables. For tablesync
+ * workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
@@ -334,6 +335,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -422,7 +424,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -511,8 +514,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -848,6 +859,29 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ *
+ * Note that this value is not stored in the sequencesync worker, because that
+ * has finished already and is about to exit.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -896,7 +930,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1610,7 +1644,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1650,6 +1684,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
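
With the worker type string added above, a running sequencesync worker should
be visible in the statistics view; a quick check (assuming the existing
pg_stat_subscription view, which exposes this string as worker_type):

    SELECT subname, pid, worker_type
      FROM pg_stat_subscription
     WHERE worker_type = 'sequence synchronization';
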
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..7927aeac054
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,726 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a sequencesync worker
+ * to handle synchronization.
+ *
+ * A single sequencesync worker is responsible for synchronizing all sequences
+ * marked in pg_subscription_rel. It begins by retrieving the list of sequences
+ * flagged for synchronization. These sequences are then processed in batches,
+ * allowing multiple entries to be synchronized within a single transaction.
+ * The worker fetches the current sequence values and page LSNs from the remote
+ * publisher, updates the corresponding sequences on the local subscriber, and
+ * finally marks each sequence as READY upon successful synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: An alternative design was considered where the launcher process would
+ * periodically check for sequences that need syncing and then start the
+ * sequencesync worker. However, the approach of having the apply worker
+ * manage the sequencesync worker was chosen for the following reasons:
+ * a) The apply worker can access the sequences that need to be synchronized
+ *    from the pg_subscription_rel system catalog. Whereas the launcher process
+ *    operates without direct database access so would need a framework to
+ *    establish connections with the databases to retrieve the sequences for
+ *    synchronization.
+ * b) It utilizes the existing tablesync worker code to start the sequencesync
+ *    process, thus preventing code duplication in the launcher.
+ * c) It simplifies code maintenance by consolidating changes to a single
+ *    location rather than multiple components.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/pg_lsn.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 10
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static List *seqinfos = NIL;
+
+/*
+ * Apply worker determines if sequence synchronization is needed.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(NULL, &has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if there is a sequencesync worker already running. */
+	sequencesync_worker = logicalrep_worker_find(WORKERTYPE_SEQUENCESYNC,
+												 MyLogicalRepWorker->subid,
+												 InvalidOid, true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(WORKERTYPE_SEQUENCESYNC, nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_sequence_errors
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_sequence_errors(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+					   StringInfo missing_seqs)
+{
+	StringInfo	detail = makeStringInfo();
+	StringInfo	hint = makeStringInfo();
+	bool		need_separator = false;
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(detail, "Insufficient privileges on the sequence(s): (%s)",
+						 insuffperm_seqs->data);
+		appendStringInfoString(hint, "Grant UPDATE privilege on the sequence(s).");
+		need_separator = true;
+	}
+
+	if (mismatched_seqs->len)
+	{
+		appendStringInfo(detail, "%sMismatched sequence(s) on subscriber: (%s)",
+						 need_separator ? "; " : "", mismatched_seqs->data);
+		appendStringInfo(hint, "%sFor mismatched sequences, alter or re-create local sequences to match the publisher's parameters.",
+						 need_separator ? " " : "");
+		need_separator = true;
+	}
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(detail, "%sMissing sequence(s) on publisher: (%s)",
+						 need_separator ? "; " : "", missing_seqs->data);
+		appendStringInfo(hint, "%sFor missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.",
+						 need_separator ? " " : "");
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s.", detail->data),
+			errhint("%s", hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * get_remote_sequence_info
+ *
+ * Extract remote sequence information from a tuple slot received from the
+ * publisher.
+ */
+static void
+get_remote_sequence_info(TupleTableSlot *slot, int *seqidx,
+						 int64 *last_value, bool *is_called,
+						 XLogRecPtr *page_lsn, Oid *remote_typid,
+						 int64 *remote_start, int64 *remote_increment,
+						 int64 *remote_min, int64 *remote_max,
+						 bool *remote_cycle)
+{
+	bool		isnull;
+	int			col = 0;
+
+	*seqidx = DatumGetInt32(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+}
+
+/*
+ * Compare sequence parameters from publisher with local sequence.
+ */
+static CopySeqResult
+validate_sequence(Relation sequence_rel, LogicalRepSequenceInfo *seqinfo,
+				  Oid remote_typid, int64 remote_start,
+				  int64 remote_increment, int64 remote_min,
+				  int64 remote_max, bool remote_cycle)
+{
+	Form_pg_sequence local_seq;
+	HeapTuple	tup;
+	CopySeqResult	result = COPYSEQ_SUCCESS;
+
+	/* Sequence was concurrently dropped? */
+	if (!sequence_rel)
+		return COPYSEQ_SKIPPED;
+
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
+
+	/* Sequence was concurrently dropped? */
+	if (!HeapTupleIsValid(tup))
+		return COPYSEQ_SKIPPED;
+
+	/* Sequence was concurrently invalidated? */
+	if (!seqinfo->entry_valid)
+	{
+		ReleaseSysCache(tup);
+		return COPYSEQ_SKIPPED;
+	}
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Sequence parameters for remote/local are the same? */
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as
+ * synchronized (READY).
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
+			  bool is_called, XLogRecPtr page_lsn)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+	Oid			seqoid = seqinfo->localrelid;
+
+	/*
+	 * Make sure that the sequence is copied as sequence owner, unless the user
+	 * has opted out of that behaviour.
+	 */
+	if (!run_as_owner)
+		SwitchToUntrustedUser(seqinfo->seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqoid, GetUserId(), ACL_UPDATE);
+
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqoid, last_value, is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY. The LSN represents the WAL position of the remote
+	 * sequence at the time it was synchronized.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqoid,
+							   SUBREL_STATE_READY, page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn)
+{
+	int			current_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, list_length(seqinfos)));
+
+	while (current_index < list_length(seqinfos))
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+		Relation sequence_rel;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+		ListCell	   *lc;
+
+		StartTransactionCommand();
+
+		for_each_from(lc, seqinfos, current_index)
+		{
+			MemoryContext oldctx;
+			LogicalRepSequenceInfo *seqinfo = (LogicalRepSequenceInfo *) lfirst(lc);
+
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			/* Skip if sequence was dropped concurrently */
+			if (!sequence_rel)
+			{
+				seqinfos = foreach_delete_current(seqinfos, lc);
+				continue;
+			}
+
+			/* Save sequence info */
+			oldctx = MemoryContextSwitchTo(TopMemoryContext);
+			seqinfo->nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+			seqinfo->seqname = pstrdup(RelationGetRelationName(sequence_rel));
+			seqinfo->seqowner = sequence_rel->rd_rel->relowner;
+			MemoryContextSwitchTo(oldctx);
+
+			table_close(sequence_rel, RowExclusiveLock);
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "('%s', '%s', %d)",
+							 seqinfo->nspname, seqinfo->seqname,
+							 foreach_current_index(lc));
+
+			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		appendStringInfo(cmd,
+						 "SELECT s.seqidx, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname, seqidx)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not fetch sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int64		last_value;
+			bool		is_called;
+			XLogRecPtr	page_lsn;
+			int			seqidx;
+			Oid			remote_typid;
+			int64		remote_start;
+			int64		remote_increment;
+			int64		remote_min;
+			int64		remote_max;
+			bool		remote_cycle;
+			CopySeqResult sync_status;
+			LogicalRepSequenceInfo *seqinfo;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			get_remote_sequence_info(slot, &seqidx, &last_value, &is_called,
+									 &page_lsn, &remote_typid, &remote_start,
+									 &remote_increment, &remote_min,
+									 &remote_max, &remote_cycle);
+
+			seqinfo = (LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+			seqinfo->found_on_pub = true;
+
+			sequence_rel = try_table_open(seqinfo->localrelid, RowExclusiveLock);
+
+			sync_status = validate_sequence(sequence_rel, seqinfo,
+											remote_typid, remote_start,
+											remote_increment, remote_min,
+											remote_max, remote_cycle);
+
+			if (sync_status == COPYSEQ_SUCCESS)
+				sync_status = copy_sequence(seqinfo, last_value, is_called,
+											page_lsn);
+
+			switch (sync_status)
+			{
+				case COPYSEQ_MISMATCH:
+					append_sequence_name(mismatched_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_mismatched_count);
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+					append_sequence_name(insuffperm_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_insuffperm_count);
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skip synchronization of sequence \"%s.%s\" because it has been altered or dropped concurrently",
+								   seqinfo->nspname, seqinfo->seqname));
+					batch_skipped_count++;
+					break;
+				case COPYSEQ_SUCCESS:
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+					batch_succeeded_count++;
+					break;
+			}
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name, (current_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1, batch_size,
+					   batch_succeeded_count, batch_skipped_count, batch_mismatched_count, batch_insuffperm_count,
+					   batch_size - (batch_succeeded_count + batch_skipped_count + batch_mismatched_count + batch_insuffperm_count)));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		/*
+		 * current_index is not incremented sequentially because some
+		 * sequences may be missing, and the number of fetched rows may not
+		 * match the batch size.
+		 */
+		current_index += batch_size;
+	}
+
+	foreach_ptr(LogicalRepSequenceInfo, seqinfo, seqinfos)
+		if (!seqinfo->found_on_pub)
+			append_sequence_name(missing_seqs, seqinfo->nspname,
+								 seqinfo->seqname, NULL);
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
+		report_sequence_errors(insuffperm_seqs, mismatched_seqs, missing_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Relcache invalidation callback
+ */
+static void
+sequencesync_list_invalidate_cb(Datum arg, Oid reloid)
+{
+	/* Quick exit if no sequence is listed yet */
+	if (!seqinfos)
+		return;
+
+	if (OidIsValid(reloid))
+	{
+		foreach_ptr(LogicalRepSequenceInfo, seqinfo, seqinfos)
+		{
+			if (seqinfo->localrelid == reloid)
+			{
+				seqinfo->entry_valid = false;
+				break;
+			}
+		}
+	}
+	else
+	{
+		/* invalidate all entries */
+		foreach_ptr(LogicalRepSequenceInfo, seqinfo, seqinfos)
+			seqinfo->entry_valid = false;
+	}
+}
+
+/*
+ * Determines which sequences require synchronization and initiates their
+ * synchronization process.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+
+	/* Watch for invalidation events. */
+	CacheRegisterRelcacheCallback(sequencesync_list_invalidate_cb,
+								  (Datum) 0);
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		LogicalRepSequenceInfo *seq;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/*
+		 * Worker needs to process sequences across transaction boundary, so
+		 * allocate them under long-lived context.
+		 */
+		oldctx = MemoryContextSwitchTo(TopMemoryContext);
+
+		seq = palloc0_object(LogicalRepSequenceInfo);
+		seq->localrelid = subrel->srrelid;
+		seq->entry_valid = true;
+		seqinfos = lappend(seqinfos, seq);
+
+		MemoryContextSwitchTo(oldctx);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	if (!seqinfos)
+		return;
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Unlike tablesync, this worker creates no replication slot. Note that we
+ * don't handle FATAL errors, which are probably due to system resource
+ * errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
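
To make copy_sequences() easier to review: once the VALUES list is filled in,
each batch issues a publisher-side query of roughly the following shape (the
schema and sequence names below are hypothetical):

    SELECT s.seqidx, ps.*, seq.seqtypid,
           seq.seqstart, seq.seqincrement, seq.seqmin,
           seq.seqmax, seq.seqcycle
      FROM ( VALUES ('public', 's1', 0), ('public', 's2', 1) )
               AS s (schname, seqname, seqidx)
      JOIN pg_namespace n ON n.nspname = s.schname
      JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname
      JOIN pg_sequence seq ON seq.seqrelid = c.oid
      JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true;

This yields the ten columns consumed by get_remote_sequence_info(): the batch
index, the (last_value, is_called, page_lsn) triple, and the six sequence
parameters that validate_sequence() compares against the local definitions.
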
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ae8c9385916..54d6a46adce 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -46,8 +47,10 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
  * Exit routine for synchronization worker.
  */
 pg_noreturn void
-FinishSyncWorker(void)
+FinishSyncWorker()
 {
+	Assert(am_sequencesync_worker() || am_tablesync_worker());
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,15 +65,29 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (am_sequencesync_worker())
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
+	else
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
-							 InvalidOid);
+	if (am_sequencesync_worker())
+	{
+		/* Find the leader apply worker and reset last_seqsync_start_time. */
+		logicalrep_reset_seqsync_start_time();
+	}
+	else
+	{
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+								 InvalidOid);
+	}
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -86,7 +103,49 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * wtype: sync worker type.
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequencesync worker, actual relid for tablesync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers, Oid relid,
+				   TimestampTz *last_start_time)
+{
+	TimestampTz now;
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers >= max_sync_workers_per_subscription)
+		return;
+
+	now = GetCurrentTimestamp();
+
+	if (!(*last_start_time) ||
+		TimestampDifferenceExceeds(*last_start_time, now,
+								   wal_retrieve_retry_interval))
+	{
+		/*
+		 * Set the last_start_time even if we fail to start the worker, so that
+		 * we won't retry until wal_retrieve_retry_interval has elapsed.
+		 */
+		*last_start_time = now;
+		(void) logicalrep_worker_launch(wtype,
+										MyLogicalRepWorker->dbid,
+										MySubscription->oid,
+										MySubscription->name,
+										MyLogicalRepWorker->userid,
+										relid, DSM_HANDLE_INVALID, false);
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -108,6 +167,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -117,17 +182,29 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-ready tables and non-ready sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * has_pending_subtables: true if the subscription has one or more tables that
+ * are not in ready state, otherwise false.
+ * has_pending_subsequences: true if the subscription has one or more sequences
+ * that are not in ready state, otherwise false.
  */
-bool
-FetchRelationStates(bool *started_tx)
+void
+FetchRelationStates(bool *has_pending_subtables,
+					bool *has_pending_subsequences,
+					bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are static, since their
+	 * values can be reused until the cached relation states are invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -139,6 +216,7 @@ FetchRelationStates(bool *started_tx)
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,17 +228,25 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-ready state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
 		foreach(lc, rstates)
 		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+			SubscriptionRelState *subrel = (SubscriptionRelState *) lfirst(lc);
+
+			if (get_rel_relkind(subrel->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+			{
+				rstate = palloc(sizeof(SubscriptionRelState));
+				memcpy(rstate, subrel, sizeof(SubscriptionRelState));
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
+			}
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -185,5 +271,9 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
-	return has_subtables;
+	if (has_pending_subtables)
+		*has_pending_subtables = has_subtables;
+
+	if (has_pending_subsequences)
+		*has_pending_subsequences = has_subsequences_non_ready;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 58c98488d7b..03e3e490e1c 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -381,7 +381,7 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -415,6 +415,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -428,11 +436,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -552,43 +555,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(WORKERTYPE_TABLESYNC, nsyncworkers,
+								   rstate->relid, &hentry->last_start_time);
 			}
 		}
 	}
@@ -1596,7 +1575,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1619,10 +1598,10 @@ bool
 AllTablesyncsReady(void)
 {
 	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1634,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1649,10 +1628,10 @@ bool
 HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
-	bool		has_subrels;
+	bool		has_tables;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1660,7 +1639,7 @@ HasSubscriptionTablesCached(void)
 		pgstat_report_stat(true);
 	}
 
-	return has_subrels;
+	return has_tables;
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7edd1c9cf06..578bfd9d778 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2465,7 +2485,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4137,7 +4160,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5580,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 !am_tablesync_worker());
 
 			PG_RE_THROW();
 		}
@@ -5700,8 +5727,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5812,6 +5839,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5831,14 +5862,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5921,9 +5954,15 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	if (am_leader_apply_worker() || am_tablesync_worker())
+	{
+		/*
+		 * Report the worker failed during either table synchronization or
+		 * apply.
+		 */
+		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+										!am_tablesync_worker());
+	}
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
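
Because this GUC now also caps the sequencesync worker (it is counted in
logicalrep_sync_worker_count() alongside tablesync workers), a subscriber
that still has many tables syncing may want to raise it so sequence
synchronization is not starved; an illustrative setting:

    ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
    SELECT pg_reload_conf();
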
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index eecb43ec6f0..7bd3fed1f68 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..eba1b96fe26 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,19 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+/*
+ * Stores metadata about a sequence involved in logical replication.
+ */
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	Oid			seqowner;
+	bool		found_on_pub;
+	bool		entry_valid;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..46b4d89dd6e 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e23fa9a4514..bc96ab6bfd6 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -264,6 +267,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+							   Oid relid, TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
 								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -272,6 +277,7 @@ extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -282,11 +288,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern void FetchRelationStates(bool *has_pending_subtables,
+								bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -351,15 +359,25 @@ extern void pa_decr_and_wait_stream_block(void);
 extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 						   XLogRecPtr remote_lsn);
 
+#define isApplyWorker(worker) ((worker)->in_use && \
+							   (worker)->type == WORKERTYPE_APPLY)
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..8b007f05567 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,14 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +54,168 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION will not sync existing sequence');
+
+# Check - newly published sequence is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# Test: REFRESH SEQUENCES and REFRESH PUBLICATION (copy_data = off)
+#
+# 1. ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize all
+#    existing sequences, but not synchronize newly added ones.
+# 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+#    also not update sequence values for newly added sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# 1. Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+# 2. Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data as false
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence values are not updated when (copy_data = off)
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Confirm that the error for parameters differing is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched sequence\(s\) on subscriber: \("public.regress_s4"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s4;
+));
+
+# Confirm that the error for missing sequence is logged.
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s4"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bb4e1b37005..b675de1321e 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1628,6 +1629,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20251029_2-0002-Documentation-for-sequence-synchronizati.patch (text/x-patch)
From fe41a6546a513a661d25a7e108f414a331fda46d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251029_2 2/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 230 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 282 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..86c778fc1f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the state of
+   sequences can be synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,200 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An error is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to retry until
+    all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2286,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2622,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2636,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index d5f0fb7ba7c..242105e2cba 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

v20251029_2-0003-Add-seq_sync_error_count-to-subscription.patch (text/x-patch)
From 2f7a9df2b136ec0e7ed0addf6a2b5e2b8d7b78a9 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 29 Oct 2025 20:10:57 +0530
Subject: [PATCH v20251029_2 3/3] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
---
 doc/src/sgml/monitoring.sgml                  |  9 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 17 ++--
 .../utils/activity/pgstat_subscription.c      | 27 +++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 ++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 80 ++++++++++++-------
 11 files changed, 122 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 242105e2cba..0b2402b6ea6 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2193,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>seq_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 823776c1498..2b1854fd940 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1414,6 +1414,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 7927aeac054..b0eec7dbb28 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -706,6 +706,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 03e3e490e1c..4cc316a213a 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 578bfd9d778..cb8a5b0f70c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5607,7 +5607,7 @@ start_apply(XLogRecPtr origin_startpos)
 			 */
 			AbortOutOfAnyTransaction();
 			pgstat_report_subscription_error(MySubscription->oid,
-											 !am_tablesync_worker());
+											 MyLogicalRepWorker->type);
 
 			PG_RE_THROW();
 		}
@@ -5954,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		* Report the worker failed during either table synchronization or
-		* apply.
-		*/
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										!am_tablesync_worker());
-	}
+	/*
+	 * Report that the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 1fe33df2756..0184a7eef61 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2195,7 +2195,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2213,25 +2213,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2248,6 +2250,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7bd3fed1f68..b776bc02237 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index bc8077cbae6..a2b52d97b55 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -768,7 +771,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 16753b2e4c0..cc872414af5 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..23f3511f9a4 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and seq_sync_error_count > 0 and sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,14 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0

#453vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#448)
Re: Logical Replication of sequences

On Wed, 29 Oct 2025 at 09:19, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Some review comments patch V20251029-0001 (the test code only)

======
src/test/subscription/t/036_sequences.pl

1.
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+# not update the sequence values for the new sequence.
+##########
+
+# Create a new sequence 'regress_s4'
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ CREATE SEQUENCE regress_s4;
+ INSERT INTO regress_seq_test SELECT nextval('regress_s4') FROM
generate_series(1,100);
+));
+

AFAICT the sequence `regress_s3` (from the previous test part) was
already a "new sequence" that had not yet been REFRESHED to the
subscriber. So I think maybe there wasn't any need to create another
sequence `regress_s4` for this test part.

Modified

~~~

2.
+# Check - newly published sequence values are not updated
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT last_value, log_cnt, is_called FROM regress_s4;
+));

Maybe that comment can give more details:
# Check - newly published sequence values are not updated when (copy_data = off)

Modified

~~~

3.
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########

OK, you have these mismatch parameters and missing sequences test for
"REFRESH PUBLICATION", but what about doing the same tests for
"REFRESH SEQUENCES" -- e,g, I am thinking you can ALTER/DROP some
publication that previously had synchronized OK, to verify what
happens during "REFRESH SEQUENCES".

I'm planning to do this in a later version.

The changes for the same are available in the v20251029_2 version
patch attached at [1].
[1]: /messages/by-id/CALDaNm18siwD6Mamv8Dd8ubwSCw3Fi6SnB4B3Lr+4R7snLkfeA@mail.gmail.com

Regards,
Vignesh

#454Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#452)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for v20251029_2-0001

======
General.

1.
I think it's more readable to consistently write the state name in
uppercase. There are a few affected comments:

e.g.
+ * we identify non-ready tables and non-ready sequences together to ensure
becomes
+ * we identify non-READY tables and non-READY sequences together to ensure
e.g.
+ * has_pending_subtables: true if the subscription has one or more tables that
+ * are not in ready state, otherwise false.
+ * has_pending_subsequences: true if the subscription has one or more sequences
+ * that are not in ready state, otherwise false.
becomes
+ * has_pending_subtables: true if the subscription has one or more tables that
+ * are not in READY state, otherwise false.
+ * has_pending_subsequences: true if the subscription has one or more sequences
+ * that are not in READY state, otherwise false.
e.g.
+ /* Fetch tables and sequences that are in non-ready state. */
becomes
+ /* Fetch tables and sequences that are in non-READY state. */

======
.../replication/logical/sequencesync.c

2.
+ * A single sequencesync worker is responsible for synchronizing all sequences
+ * marked in pg_subscription_rel. It begins by retrieving the list of sequences

typo? /marked in pg_subscription_rel/mapped in pg_subscription_rel/

~~~

3.
+static void
+report_sequence_errors(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+    StringInfo insuffperm_seqs)

Perhaps this function should also have an assertion:

Assert(insuffperm_seqs->len || mismatched_seqs->len || insuffperm_seqs->len);

~~~

copy_sequences:

4.
+ int current_index = 0;

It's not really a current index. Maybe a better name for this variable
is something more like 'cur_batch_base_index'?

~~~

LogicalRepSyncSequences:

5.
+ if (!seqinfos)
+ return;
+

Maybe this needs some comment to say it is an early exit, for when
somehow the sequencesync worker started to do some work, but then it
found that there was nothing to do. Maybe also describe how this could
happen?

======
src/backend/replication/logical/syncutils.c

launch_sync_worker

6.
+ * wtype: sync worker type.
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequencesync worker, actual relid for tablesync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers, Oid relid,
+    TimestampTz *last_start_time)

Now it might be nice to have a sanity assert to match that function
comment. e.g.

Assert((wtype == WORKERTYPE_TABLESYNC) == OidIsValid(relid));

~~~

FetchRelationStates:

7.
  foreach(lc, rstates)
  {
- rstate = palloc(sizeof(SubscriptionRelState));
- memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
- table_states_not_ready = lappend(table_states_not_ready, rstate);
+ SubscriptionRelState *subrel = (SubscriptionRelState *) lfirst(lc);

Should we change that to:
foreach_ptr(SubscriptionRelState, subrel, rstates)

======
src/backend/replication/logical/tablesync.c

ProcessSyncingTablesForApply:

8.
bool started_tx = false;
bool should_exit = false;
Relation rel = NULL;

Assert(!IsTransactionState());

/* We need up-to-date sync state info for subscription tables here. */
FetchRelationStates(NULL, NULL, &started_tx);

~

It's not necessary here to assign that `started_tx` to false at the
declaration, because FetchRelationStates will set it.
ProcessSyncingSequencesForApply() does not assign it at the declaration
like this, so I felt this function should not do it either.

Alternatively, leave this code as-is, change
ProcessSyncingSequencesForApply() to assign it false too, and then add
Assert(*started_tx == false) inside the FetchRelationStates function.

Whatever way you choose, the point is that all caller code should be consistent.

~~~

AllTablesyncsReady:

9.
Ditto previous comment about the inconsistent auto-assignment of started_tx.

~~~

HasSubscriptionTablesCached:

10.
Ditto previous comment about the inconsistent auto-assignment of started_tx.

======
src/test/subscription/t/036_sequences.pl

11.
The comments can be more helpful if they also name the sequences
involved -- It saves having to go hunting to see what is new and what
is existing. I mean only small changes like below:

BEFORE
# Check - existing sequence is not synced
# Check - newly published sequence is synced

SUGGESTION
# Check - existing sequence ('regress_s1') is not synced
# Check - newly published sequence ('regress_s2') is synced

etc in multiple places.

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#455Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#454)
1 attachment(s)
Re: Logical Replication of sequences

On Thu, Oct 30, 2025 at 8:12 AM Peter Smith <smithpb2250@gmail.com> wrote:

3.
+static void
+report_sequence_errors(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+    StringInfo insuffperm_seqs)

Perhaps this function should also have an assertion:

Assert(insuffperm_seqs->len || mismatched_seqs->len || insuffperm_seqs->len);

I don't think such an assertion is helpful. As of now, this is called
from only one place, where we ensure that one of the conditions
mentioned in the assert is true; but in the future, even if we call it
without any condition being true, it will just be a harmless call.

Other review comments:
===================
1. In copy_sequences(), we start a transaction before getting the
changes from the publisher and keep using it until we have copied all
sequences. If we don't need to retain the lock while querying the
publisher, can't we commit that transaction earlier and start a new
one for the copy_sequence() part? Is it possible to fetch the
required sequence information in LogicalRepSyncSequences() instead?
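
In other words, roughly (only a sketch; function names as in the patch):

/* LogicalRepSyncSequences() */
StartTransactionCommand();
/* scan pg_subscription_rel and build the list of sequences to sync */
CommitTransactionCommand();

/* copy_sequences() -- one transaction per batch */
StartTransactionCommand();
/* fetch remote values for the batch and apply them locally */
CommitTransactionCommand();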

2. Isn't it better to add some comments explaining why we decided not
to retain locks on the required sequences while querying the publisher,
and instead re-validate them later?

3. To avoid a lot of parameters in get_remote_sequence_info() and
validate_sequence(), can we have a single function, say
get_and_validate_seq_info()? It would retrieve only the parameters
required by copy_sequence().
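
Something with a signature along these lines, perhaps (only a sketch;
the exact parameter list is open):

static CopySeqResult
get_and_validate_seq_info(TupleTableSlot *slot, Relation *sequence_rel,
                          LogicalRepSequenceInfo **seqinfo,
                          XLogRecPtr *page_lsn, int64 *last_value,
                          bool *is_called);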

4.
validate_sequence()
{
...
+ /* Sequence was concurrently invalidated? */
+ if (!seqinfo->entry_valid)
+ {
+ ReleaseSysCache(tup);
+ return COPYSEQ_SKIPPED;
+ }

So, if the sequence is renamed concurrently, we report the cause as
SKIPPED, but shouldn't it be COPYSEQ_MISMATCH? Is it possible to
compare the local sequence name with the remote name in
validate_sequence() after taking a lock on the sequence relation? If
so, we can return the state as COPYSEQ_MISMATCH.
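
For example, a check along these lines once the sequence relation is
locked (only a sketch; exact field names may differ):

/* Sequence was concurrently renamed? */
if (strcmp(seqinfo->nspname,
           get_namespace_name(get_rel_namespace(seqinfo->localrelid))) != 0 ||
    strcmp(seqinfo->seqname, get_rel_name(seqinfo->localrelid)) != 0)
    return COPYSEQ_MISMATCH;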

Apart from the above, I have modified comments at various places in
the patch, see attached.

--
With Regards,
Amit Kapila.

Attachments:

v1_amit.patch.txttext/plain; charset=UTF-8; name=v1_amit.patch.txtDownload
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 9d1dc87ceb1..8d671b7a29d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1796,9 +1796,9 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 /*
  * Return the sequence tuple along with its page LSN.
  *
- * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation. This will also
- * be used by logical replication while synchronizing sequences.
+ * This is primarily used by pg_dump to efficiently collect sequence data
+ * without querying each sequence individually, and is also leveraged by
+ * logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
@@ -1842,11 +1842,6 @@ pg_get_sequence_data(PG_FUNCTION_ARGS)
 
 		values[0] = Int64GetDatum(seq->last_value);
 		values[1] = BoolGetDatum(seq->is_called);
-
-		/*
-		 * For details about recording the LSN, see the
-		 * UpdateSubscriptionRelState() call in copy_sequence().
-		 */
 		values[2] = LSNGetDatum(PageGetLSN(page));
 
 		UnlockReleaseBuffer(buf);
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 7927aeac054..5418327c237 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -28,12 +28,13 @@
  * to handle synchronization.
  *
  * A single sequencesync worker is responsible for synchronizing all sequences
- * marked in pg_subscription_rel. It begins by retrieving the list of sequences
- * flagged for synchronization. These sequences are then processed in batches,
- * allowing multiple entries to be synchronized within a single transaction.
- * The worker fetches the current sequence values and page LSNs from the remote
- * publisher, updates the corresponding sequences on the local subscriber, and
- * finally marks each sequence as READY upon successful synchronization.
+ * in INIT state in pg_subscription_rel. It begins by retrieving the list of
+ * sequences flagged for synchronization. These sequences are then processed
+ * in batches, allowing multiple entries to be synchronized within a single
+ * transaction. The worker fetches the current sequence values and page LSNs
+ * from the remote publisher, updates the corresponding sequences on the local
+ * subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
  *
  * Sequence state transitions follow this pattern:
  *   INIT → READY
@@ -42,19 +43,9 @@
  * sequences are synchronized per transaction. The locks on the sequence
  * relation will be periodically released at each transaction commit.
  *
- * XXX: An alternative design was considered where the launcher process would
- * periodically check for sequences that need syncing and then start the
- * sequencesync worker. However, the approach of having the apply worker
- * manage the sequencesync worker was chosen for the following reasons:
- * a) The apply worker can access the sequences that need to be synchronized
- *    from the pg_subscription_rel system catalog. Whereas the launcher process
- *    operates without direct database access so would need a framework to
- *    establish connections with the databases to retrieve the sequences for
- *    synchronization.
- * b) It utilizes the existing tablesync worker code to start the sequencesync
- *    process, thus preventing code duplication in the launcher.
- * c) It simplifies code maintenance by consolidating changes to a single
- *    location rather than multiple components.
+ * XXX: We did not choose the launcher process to manage launching the
+ * sequencesync worker because the launcher has no database connection, which
+ * it would need to read the to-be-synchronized sequences from pg_subscription_rel.
  *-------------------------------------------------------------------------
  */
 
@@ -338,8 +329,7 @@ copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
 
 	/*
 	 * Record the remote sequence’s LSN in pg_subscription_rel and mark the
-	 * sequence as READY. The LSN represents the WAL position of the remote
-	 * sequence at the time it was synchronized.
+	 * sequence as READY.
 	 */
 	UpdateSubscriptionRelState(MySubscription->oid, seqoid,
 							   SUBREL_STATE_READY, page_lsn, false);
#456vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#455)
3 attachment(s)
Re: Logical Replication of sequences

On Thu, 30 Oct 2025 at 16:20, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Oct 30, 2025 at 8:12 AM Peter Smith <smithpb2250@gmail.com> wrote:

3.
+static void
+report_sequence_errors(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+    StringInfo insuffperm_seqs)

Perhaps this function should also have an assertion:

Assert(insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len);

I don't think such an assertion is helpful. As of now, this is called
from only one place, where we ensure that one of the conditions
mentioned in the assert is true. Even if, in the future, we call it
without any condition being true, it will just be a harmless call.

Other review comments:
===================
1. In copy_sequences(), we start a transaction before getting the
changes from the publisher and keep using it until we have copied all
sequences. If we don't need to retain the lock while querying the
publisher, can't we commit that transaction earlier and start a new
one for the copy_sequence() part? Is it possible to fetch the
required sequence information in LogicalRepSyncSequences() instead?

Changed it so that the required sequence information is fetched in
LogicalRepSyncSequences().

2. Isn't it better to add some comments explaining why we decided not
to retain locks on the required sequences while querying the publisher,
and instead re-validate them later?

Added comments for the same

3. To avoid a lot of parameters in get_remote_sequence_info() and
validate_sequence(), can we have a single function, say
get_and_validate_seq_info()? It would retrieve only the parameters
required by copy_sequence().

Modified

4.
validate_sequence()
{
...
+ /* Sequence was concurrently invalidated? */
+ if (!seqinfo->entry_valid)
+ {
+ ReleaseSysCache(tup);
+ return COPYSEQ_SKIPPED;
+ }

So, if the sequence is renamed concurrently, we report the cause as
SKIPPED, but shouldn't it be COPYSEQ_MISMATCH? Is it possible to
compare the local sequence name with the remote name in
validate_sequence() after taking a lock on the sequence relation? If
so, we can return the state as COPYSEQ_MISMATCH.

Modified

Apart from the above, I have modified comments at various places in
the patch, see attached.

Thanks, I have merged them.

The attached v20251030 version of the patch includes these changes.
I have also addressed Peter's comments from [1] and Hou-san's comments from [2].

[1]: /messages/by-id/CAHut+Pt73A8XNnBSiF5wEfqM4mT6ofVwW9BXw3oUjbGsaYQN_g@mail.gmail.com
[2]: /messages/by-id/TY4PR01MB16907CAA300EE41232FB0BA8194FAA@TY4PR01MB16907.jpnprd01.prod.outlook.com

Regards,
Vignesh

Attachments:

v20251030-0001-New-worker-for-sequence-synchronization-du.patchtext/x-patch; charset=UTF-8; name=v20251030-0001-New-worker-for-sequence-synchronization-du.patchDownload
From 2d6f9bb421990f2e708ef656e84bffadf0256718 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 15:31:13 +0530
Subject: [PATCH v20251030 1/3] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves the remote values of the sequences that are in INIT state.
    b) Reports an error if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all are done; commits synchronized sequences in batches of 100.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - (A new command introduced in PG19 by a prior patch.)
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike the "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of sequences will not be done in this
      case.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/commands/sequence.c               |  18 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  51 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 690 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 134 +++-
 src/backend/replication/logical/tablesync.c   |  73 +-
 src/backend/replication/logical/worker.c      |  69 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   2 +-
 src/include/catalog/pg_subscription_rel.h     |  11 +
 src/include/commands/sequence.h               |   1 +
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  22 +-
 src/test/subscription/t/036_sequences.pl      | 180 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 18 files changed, 1153 insertions(+), 114 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c23dee5231c..8d671b7a29d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,8 +953,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1056,7 +1055,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1064,14 +1063,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1080,7 +1079,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1797,8 +1796,9 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 /*
  * Return the sequence tuple along with its page LSN.
  *
- * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * This is primarily used by pg_dump to efficiently collect sequence data
+ * without querying each sequence individually, and is also leveraged by
+ * logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 95b5cae9a55..a3e84d369bd 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,9 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given worker type,
  * subscription id, and relation id.
  *
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables. For table sync workers, the relid should be set
- * to the OID of the relation being synchronized.
+ * For apply workers and sequencesync workers, the relid should be set to
+ * InvalidOid, as they manage changes across all tables. For tablesync
+ * workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
@@ -334,6 +335,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -422,7 +424,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -511,8 +514,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -848,6 +859,29 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ *
+ * Note that this value is not stored in the sequencesync worker, because that
+ * has finished already and is about to exit.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);
+
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -896,7 +930,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1610,7 +1644,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1650,6 +1684,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..3d81d671560
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,690 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a sequencesync worker
+ * to handle synchronization.
+ *
+ * A single sequencesync worker is responsible for synchronizing all sequences
+ * in INIT state in pg_subscription_rel. It begins by retrieving the list of
+ * sequences flagged for synchronization. These sequences are then processed
+ * in batches, allowing multiple entries to be synchronized within a single
+ * transaction. The worker fetches the current sequence values and page LSNs
+ * from the remote publisher, updates the corresponding sequences on the local
+ * subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: We did not choose the launcher process to manage launching the
+ * sequencesync worker because the launcher has no database connection, which
+ * it would need to read the to-be-synchronized sequences from pg_subscription_rel.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/pg_lsn.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 10
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static List *seqinfos = NIL;
+
+/*
+ * Apply worker determines if sequence synchronization is needed.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(NULL, &has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if there is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(WORKERTYPE_SEQUENCESYNC,
+												 MyLogicalRepWorker->subid,
+												 InvalidOid, true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(WORKERTYPE_SEQUENCESYNC, nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * report_sequence_errors
+ *
+ * Report discrepancies in sequence data between the publisher and subscriber.
+ * It identifies sequences that do not have sufficient privileges, as well as
+ * sequences that exist on both sides but have mismatched values.
+ */
+static void
+report_sequence_errors(StringInfo insuffperm_seqs, StringInfo mismatched_seqs,
+					   StringInfo missing_seqs)
+{
+	StringInfo	detail = makeStringInfo();
+	StringInfo	hint = makeStringInfo();
+	bool		need_separator = false;
+
+	if (insuffperm_seqs->len)
+	{
+		appendStringInfo(detail, "Insufficient privileges on the sequence(s): (%s)",
+						 insuffperm_seqs->data);
+		appendStringInfoString(hint, "Grant UPDATE privilege on the sequence(s).");
+		need_separator = true;
+	}
+
+	if (mismatched_seqs->len)
+	{
+		appendStringInfo(detail, "%sMismatched or renamed sequence(s) on subscriber: (%s)",
+						 need_separator ? "; " : "", mismatched_seqs->data);
+		appendStringInfo(hint, "%sFor mismatched sequences, alter or re-create local sequences to match the publisher's definition.",
+						 need_separator ? " " : "");
+		need_separator = true;
+	}
+
+	if (missing_seqs->len)
+	{
+		appendStringInfo(detail, "%sMissing sequence(s) on publisher: (%s)",
+						 need_separator ? "; " : "", missing_seqs->data);
+		appendStringInfo(hint, "%sFor missing sequences, remove them locally or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.",
+						 need_separator ? " " : "");
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"", MySubscription->name),
+			errdetail("%s.", detail->data),
+			errhint("%s", hint->data));
+}
+
+/*
+ * Appends a qualified sequence name to a StringInfo buffer. Optionally
+ * increments a counter if provided. Used to build comma-separated lists of
+ * sequences.
+ */
+static void
+append_sequence_name(StringInfo buf, const char *nspname, const char *seqname,
+					 int *count)
+{
+	if (buf->len > 0)
+		appendStringInfoString(buf, ", ");
+
+	appendStringInfo(buf, "\"%s.%s\"", nspname, seqname);
+
+	if (count)
+		(*count)++;
+}
+
+/*
+ * get_and_validate_seq_info
+ *
+ * Extracts remote sequence information from the tuple slot received from the
+ * publisher and validates it against the corresponding local sequence
+ * definition.
+ */
+static CopySeqResult
+get_and_validate_seq_info(TupleTableSlot *slot, Relation *sequence_rel,
+						  LogicalRepSequenceInfo **seqinfo,
+						  XLogRecPtr *page_lsn, int64 *last_value,
+						  bool *is_called)
+{
+	bool		isnull;
+	int			col = 0;
+	int			seqidx;
+	Oid			remote_typid;
+	int64		remote_start;
+	int64		remote_increment;
+	int64		remote_min;
+	int64		remote_max;
+	bool		remote_cycle;
+	CopySeqResult result = COPYSEQ_SUCCESS;
+	HeapTuple	tup;
+	Form_pg_sequence local_seq;
+	LogicalRepSequenceInfo *seqinfo_local;
+
+	seqidx = DatumGetInt32(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	*page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	seqinfo_local = (LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+	seqinfo_local->found_on_pub = true;
+	*seqinfo = seqinfo_local;
+
+	*sequence_rel = try_table_open(seqinfo_local->localrelid, RowExclusiveLock);
+	if (!*sequence_rel)			/* Sequence was concurrently dropped? */
+		return COPYSEQ_SKIPPED;
+
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo_local->localrelid));
+	if (!HeapTupleIsValid(tup)) /* Sequence was concurrently dropped? */
+		return COPYSEQ_SKIPPED;
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Sequence parameters for remote/local are the same? */
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	/* Sequence was concurrently renamed? */
+	if (strcmp(seqinfo_local->nspname,
+			   get_namespace_name(get_rel_namespace(seqinfo_local->localrelid))) ||
+		strcmp(seqinfo_local->seqname,
+			   get_rel_name(seqinfo_local->localrelid)))
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as
+ * synchronized (READY).
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
+			  bool is_called, XLogRecPtr page_lsn, Oid seqowner)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+	Oid			seqoid = seqinfo->localrelid;
+
+	/*
+	 * Make sure that the sequence is copied as sequence owner, unless the
+	 * user has opted out of that behaviour.
+	 */
+	if (!run_as_owner)
+		SwitchToUntrustedUser(seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqoid, GetUserId(), ACL_UPDATE);
+
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqoid, last_value, is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqoid,
+							   SUBREL_STATE_READY, page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn)
+{
+	int			cur_batch_base_index = 0;
+	StringInfo	mismatched_seqs = makeStringInfo();
+	StringInfo	missing_seqs = makeStringInfo();
+	StringInfo	insuffperm_seqs = makeStringInfo();
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, list_length(seqinfos)));
+
+	while (cur_batch_base_index < list_length(seqinfos))
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+		int			batch_missing_count;
+		Relation	sequence_rel;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		for (int idx = cur_batch_base_index; idx < list_length(seqinfos); idx++)
+		{
+			LogicalRepSequenceInfo *seqinfo =
+				(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
+							 seqinfo->nspname, seqinfo->seqname, idx);
+
+			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		/*
+		 * We deliberately avoid acquiring a local lock on the sequence before
+		 * querying the publisher to prevent potential distributed deadlocks
+		 * in bi-directional replication setups. For instance, a concurrent
+		 * ALTER SEQUENCE on one node might block this worker, while the
+		 * worker's own local lock simultaneously blocks a similar operation
+		 * on the other node, resulting in a circular wait that spans both nodes
+		 * and remains undetected.
+		 *
+		 * Note: Each entry in VALUES includes an index 'seqidx' that
+		 * represents the sequence's position in the local 'seqinfos' list.
+		 * This index is propagated to the query results and later used to
+		 * directly map the fetched publisher sequence rows back to their
+		 * corresponding local entries without relying on result order or name
+		 * matching.
+		 */
+		appendStringInfo(cmd,
+						 "SELECT s.seqidx, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname, seqidx)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not fetch sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			int64		last_value;
+			bool		is_called;
+			XLogRecPtr	page_lsn;
+			CopySeqResult sync_status;
+			LogicalRepSequenceInfo *seqinfo;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			sync_status = get_and_validate_seq_info(slot, &sequence_rel,
+													&seqinfo, &page_lsn,
+													&last_value, &is_called);
+			if (sync_status == COPYSEQ_SUCCESS)
+				sync_status = copy_sequence(seqinfo, last_value, is_called,
+											page_lsn,
+											sequence_rel->rd_rel->relowner);
+
+			switch (sync_status)
+			{
+				case COPYSEQ_MISMATCH:
+					append_sequence_name(mismatched_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_mismatched_count);
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+					append_sequence_name(insuffperm_seqs, seqinfo->nspname,
+										 seqinfo->seqname,
+										 &batch_insuffperm_count);
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+								   seqinfo->nspname, seqinfo->seqname));
+					batch_skipped_count++;
+					break;
+
+				case COPYSEQ_SUCCESS:
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+					batch_succeeded_count++;
+					break;
+			}
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		batch_missing_count = batch_size - (batch_succeeded_count +
+											batch_mismatched_count +
+											batch_insuffperm_count +
+											batch_skipped_count);
+
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+					   MySubscription->name,
+					   (cur_batch_base_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1,
+					   batch_size, batch_succeeded_count, batch_skipped_count,
+					   batch_mismatched_count, batch_insuffperm_count,
+					   batch_missing_count));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		if (batch_missing_count)
+		{
+			for (int idx = cur_batch_base_index; idx < cur_batch_base_index + batch_size; idx++)
+			{
+				LogicalRepSequenceInfo *seqinfo =
+					(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+				/* If the sequence was not found on publisher, record it */
+				if (!seqinfo->found_on_pub)
+					append_sequence_name(missing_seqs, seqinfo->nspname,
+										 seqinfo->seqname, NULL);
+			}
+		}
+
+		/*
+		 * cur_batch_base_index is not incremented sequentially because some
+		 * sequences may be missing, and the number of fetched rows may not
+		 * match the batch size.
+		 */
+		cur_batch_base_index += batch_size;
+	}
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs->len || mismatched_seqs->len || missing_seqs->len)
+		report_sequence_errors(insuffperm_seqs, mismatched_seqs, missing_seqs);
+
+	destroyStringInfo(missing_seqs);
+	destroyStringInfo(mismatched_seqs);
+	destroyStringInfo(insuffperm_seqs);
+}
+
+/*
+ * Determines which sequences require synchronization and initiates their
+ * synchronization process.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		char		relkind;
+		LogicalRepSequenceInfo *seq;
+		Relation	sequence_rel;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		/* Skip if the relation is not a sequence */
+		relkind = get_rel_relkind(subrel->srrelid);
+		if (relkind != RELKIND_SEQUENCE)
+			continue;
+
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+
+		/* Skip if sequence was dropped concurrently */
+		if (!sequence_rel)
+			continue;
+
+		/*
+		 * Worker needs to process sequences across transaction boundary, so
+		 * allocate them under long-lived context.
+		 */
+		oldctx = MemoryContextSwitchTo(TopMemoryContext);
+
+		seq = palloc0_object(LogicalRepSequenceInfo);
+		seq->localrelid = subrel->srrelid;
+		seq->nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+		seq->seqname = pstrdup(RelationGetRelationName(sequence_rel));
+		seqinfos = lappend(seqinfos, seq);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/*
+	 * Exit early if no catalog entries found, likely due to concurrent drops.
+	 */
+	if (!seqinfos)
+		return;
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Allocate the slot name in long-lived context on return. Note that we don't
+ * handle FATAL errors which are probably because of system resource error and
+ * are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ae8c9385916..c79a9c3baac 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -48,6 +49,8 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
 pg_noreturn void
 FinishSyncWorker(void)
 {
+	Assert(am_sequencesync_worker() || am_tablesync_worker());
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,15 +65,29 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (am_sequencesync_worker())
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
+	else
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
-							 InvalidOid);
+	if (am_sequencesync_worker())
+	{
+		/* Find the leader apply worker and reset last_seqsync_start_time. */
+		logicalrep_reset_seqsync_start_time();
+	}
+	else
+	{
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+								 InvalidOid);
+	}
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -86,7 +103,52 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * wtype: sync worker type.
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequencesync worker, actual relid for tablesync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers, Oid relid,
+				   TimestampTz *last_start_time)
+{
+	TimestampTz now;
+
+	Assert((wtype == WORKERTYPE_TABLESYNC && OidIsValid(relid)) ||
+		   (wtype == WORKERTYPE_SEQUENCESYNC && !OidIsValid(relid)));
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers >= max_sync_workers_per_subscription)
+		return;
+
+	now = GetCurrentTimestamp();
+
+	if (!(*last_start_time) ||
+		TimestampDifferenceExceeds(*last_start_time, now,
+								   wal_retrieve_retry_interval))
+	{
+		/*
+		 * Set the last_start_time even if we fail to start the worker, so
+		 * that we won't retry until wal_retrieve_retry_interval has elapsed.
+		 */
+		*last_start_time = now;
+		(void) logicalrep_worker_launch(wtype,
+										MyLogicalRepWorker->dbid,
+										MySubscription->oid,
+										MySubscription->name,
+										MyLogicalRepWorker->userid,
+										relid, DSM_HANDLE_INVALID, false);
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -108,6 +170,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -117,17 +185,29 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-READY tables and non-READY sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * has_pending_subtables: true if the subscription has one or more tables that
+ * are not in READY state, otherwise false.
+ * has_pending_subsequences: true if the subscription has one or more sequences
+ * that are not in READY state, otherwise false.
  */
-bool
-FetchRelationStates(bool *started_tx)
+void
+FetchRelationStates(bool *has_pending_subtables,
+					bool *has_pending_subsequences,
+					bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared as static,
+	 * since the same value can be used until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -135,10 +215,10 @@ FetchRelationStates(bool *started_tx)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
-		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,17 +230,23 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-READY state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
+		foreach_ptr(SubscriptionRelState, subrel, rstates)
 		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+			if (get_rel_relkind(subrel->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+			{
+				rstate = palloc(sizeof(SubscriptionRelState));
+				memcpy(rstate, subrel, sizeof(SubscriptionRelState));
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
+			}
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -185,5 +271,9 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
-	return has_subtables;
+	if (has_pending_subtables)
+		*has_pending_subtables = has_subtables;
+
+	if (has_pending_subsequences)
+		*has_pending_subsequences = has_subsequences_non_ready;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 58c98488d7b..c3b525e6557 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -374,14 +374,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	};
 	static HTAB *last_start_times = NULL;
 	ListCell   *lc;
-	bool		started_tx = false;
+	bool		started_tx;
 	bool		should_exit = false;
 	Relation	rel = NULL;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -415,6 +415,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -428,11 +436,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -552,43 +555,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(WORKERTYPE_TABLESYNC, nsyncworkers,
+								   rstate->relid, &hentry->last_start_time);
 			}
 		}
 	}
@@ -1596,7 +1575,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1618,11 +1597,11 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		started_tx;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1634,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1649,10 +1628,10 @@ bool
 HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
-	bool		has_subrels;
+	bool		has_tables;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1660,7 +1639,7 @@ HasSubscriptionTablesCached(void)
 		pgstat_report_stat(true);
 	}
 
-	return has_subrels;
+	return has_tables;
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7edd1c9cf06..8026a007ec3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2465,7 +2485,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4137,7 +4160,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5580,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 !am_tablesync_worker());
 
 			PG_RE_THROW();
 		}
@@ -5700,8 +5727,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5812,6 +5839,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5831,14 +5862,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5921,9 +5954,15 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	if (am_leader_apply_worker() || am_tablesync_worker())
+	{
+		/*
+		 * Report the worker failed during either table synchronization or
+		 * apply.
+		 */
+		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+										 !am_tablesync_worker());
+	}
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9121a382f76..34b7fddb0e7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..034c4524932 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,17 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+/*
+ * Stores metadata about a sequence involved in logical replication.
+ */
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	bool		found_on_pub;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..46b4d89dd6e 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e23fa9a4514..32ef365f4a6 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -264,6 +267,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+							   Oid relid, TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
 								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -272,6 +277,7 @@ extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -282,11 +288,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern void FetchRelationStates(bool *has_pending_subtables,
+								bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -353,13 +361,21 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..897f08877ed 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,14 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +54,170 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences on the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence ('regress_s1') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION will not sync existing sequence');
+
+# Check - newly published sequence ('regress_s2') is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# Test: REFRESH SEQUENCES and REFRESH PUBLICATION (copy_data = off)
+#
+# 1. ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize all
+#    existing sequences, but not synchronize newly added ones.
+# 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+#    also not update sequence values for newly added sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# 1. Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences ('regress_s1' and 'regress_s2') are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequence regress_s1');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequence regress_s2');
+
+# Check - newly published sequence ('regress_s3') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+# 2. Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data as false
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence ('regress_s3') is not synced when
+# (copy_data = off).
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
+);
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql(
+	'postgres', "
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION"
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+# Verify that an error is logged for parameter differences on sequence
+# ('regress_s4').
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Mismatched or renamed sequence\(s\) on subscriber: \("public.regress_s4"\)/,
+	$log_offset);
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	DROP SEQUENCE regress_s4;
+));
+
+# Verify that an error is logged for the missing sequence ('regress_s4').
+$node_subscriber->wait_for_log(
+	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.* Missing sequence\(s\) on publisher: \("public.regress_s4"\)/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ac2da4c98cf..735dd8317d6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1628,6 +1629,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0
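
As a quick manual counterpart to what 036_sequences.pl automates, the sequences
tracked by a subscription and their sync state could be listed on the subscriber
with something along these lines (illustrative only, not part of the patch):

    -- sequences share pg_subscription_rel with tables; relkind 'S' selects them
    SELECT n.nspname, c.relname, r.srsubstate
      FROM pg_subscription_rel r
      JOIN pg_class c ON c.oid = r.srrelid
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'S';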

Attachment: v20251030-0003-Add-seq_sync_error_count-to-subscription-s.patch (text/x-patch)
From 7901e1e84978817afda63fa9ec9d3f5e6b18a2b1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 30 Oct 2025 21:01:47 +0530
Subject: [PATCH v20251030 3/3] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
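
For illustration only (not part of the patch), the new counter is visible next to
the existing ones in the pg_stat_subscription_stats view:

    -- apply_error_count and sync_error_count already exist; seq_sync_error_count is new
    SELECT subname, apply_error_count, seq_sync_error_count, sync_error_count
      FROM pg_stat_subscription_stats;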
---
 doc/src/sgml/monitoring.sgml                  |  9 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 17 ++--
 .../utils/activity/pgstat_subscription.c      | 27 +++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 ++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 80 ++++++++++++-------
 11 files changed, 122 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index e3523ac882d..1dc0024ab92 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2193,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>seq_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dec8df4f8ee..059e8778ca7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1415,6 +1415,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 3d81d671560..4069e548d26 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -670,6 +670,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index c3b525e6557..db48e6f6d91 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8026a007ec3..cb8a5b0f70c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5607,7 +5607,7 @@ start_apply(XLogRecPtr origin_startpos)
 			 */
 			AbortOutOfAnyTransaction();
 			pgstat_report_subscription_error(MySubscription->oid,
-											 !am_tablesync_worker());
+											 MyLogicalRepWorker->type);
 
 			PG_RE_THROW();
 		}
@@ -5954,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		 * Report the worker failed during either table synchronization or
-		 * apply.
-		 */
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										 !am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a710508979e..1521d6e2ab4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2203,7 +2203,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2221,25 +2221,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2256,6 +2258,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34b7fddb0e7..5cf9e12fcb9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 7ae503e71a2..a0610bb3e31 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -769,7 +772,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 77e25ca029e..fe20f613c3a 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..23f3511f9a4 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and seq_sync_error_count > 0 and sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,14 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0
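
For reference, the 026_stats.pl changes above exercise both reset paths; the
underlying calls (existing core functions, shown here only as an illustration,
assuming a subscription named sub1) are:

    -- reset statistics for a single subscription, or for all subscriptions
    SELECT pg_stat_reset_subscription_stats(
        (SELECT oid FROM pg_subscription WHERE subname = 'sub1'));
    SELECT pg_stat_reset_subscription_stats(NULL);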

Attachment: v20251030-0002-Documentation-for-sequence-synchronization.patch (text/x-patch)
From a0bbb09a5861433b6d6dfd75186760b08a3b1c26 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251030 2/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 230 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 282 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..9d54f8b26ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers, and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last value set for the sequence by <function>nextval</function>
+        or <function>setval</function>, <literal>is_called</literal> indicates
+        whether the sequence has been used, and <literal>page_lsn</literal>
+        is the LSN of the most recent WAL record that modified this
+        sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
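
A usage sketch for the function documented above (illustrative only; the output
depends on the sequence's history, and s1 is just an example name):

    -- returns last_value, is_called and the LSN of the sequence's last WAL change
    SELECT * FROM pg_get_sequence_data('s1');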
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..86c778fc1f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the state of
+   sequences can be synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,200 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration parameter.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An error is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to retry until
+    all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber-side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2286,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2622,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
+    synchronization workers, and a sequence synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2636,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index f3bf527d5b4..e3523ac882d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

#457Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#456)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for v20251030-0001

======
Commit message

1.
3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
- (A new command introduced in PG19 by a prior patch.)
- All sequences in pg_subscription_rel are reset to DATASYNC state.
~

DATASYNC? Just above, it says the only possible states for sequences
are INIT and READY.

======
.../replication/logical/sequencesync.c

2.
+ * A single sequencesync worker is responsible for synchronizing all sequences
+ * in INIT state in pg_subscription_rel. It begins by retrieving the list of
+ * sequences flagged for synchronization. These sequences are then processed
+ * in batches, allowing multiple entries to be synchronized within a single
+ * transaction. The worker fetches the current sequence values and page LSNs
+ * from the remote publisher, updates the corresponding sequences on the local
+ * subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
+ *

Those first 2 sentences seem repetitive because AFAIK "in INIT state"
and "flagged for synchronization" are exactly the same thing.

SUGGESTION
A single sequencesync worker is responsible for synchronizing all
sequences. It begins by retrieving the list of sequences that are
flagged for needing synchronization (e.g. those with INIT state).

~~~

get_and_validate_seq_info:

3.
+/*
+ * get_and_validate_seq_info
+ *
+ * Extracts remote sequence information from the tuple slot received from the
+ * publisher and validates it against the corresponding local sequence
+ * definition.
+ */

Missing comma.

/publisher and validates/publisher, and validates/

~~~

4.
Now that this function is in 2 parts, I think each part should be
clearly identified with comments, something like:

PART 1:
/*
* 1. Extract sequence information from the tuple slot received from the
* publisher
*/

PART 2:
/*
* 2. Compare the remote sequence definition to the local sequence definition,
* and report any discrepancies.
*/

~~~

5.
+ seqinfo_local = (LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+ seqinfo_local->found_on_pub = true;
+ *seqinfo = seqinfo_local;

Is this separate `seqinfo_local` variable needed? It seems always
unconditionally assigned to the parameter, so you might as well just
do without the extra variable. Maybe just rename the parameter as
`seqinfo_local`?

~~~

6.
+ if (!*sequence_rel) /* Sequence was concurrently dropped? */
+ return COPYSEQ_SKIPPED;
+
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo_local->localrelid));
+ if (!HeapTupleIsValid(tup)) /* Sequence was concurrently dropped? */
+ return COPYSEQ_SKIPPED;

Nit. IMO the code was easier to read in the previous patch version,
when it was commented above the code like:

/* Sequence was concurrently dropped */
if (!*sequence_rel)
return COPYSEQ_SKIPPED;

/* Sequence was concurrently dropped */
tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
if (!HeapTupleIsValid(tup))
return COPYSEQ_SKIPPED;
~~~

copy_sequence:

7.
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
+   bool is_called, XLogRecPtr page_lsn, Oid seqowner)
+{
+ UserContext ucxt;
+ AclResult aclresult;
+ bool run_as_owner = MySubscription->runasowner;
+ Oid seqoid = seqinfo->localrelid;
+
+ /*
+ * Make sure that the sequence is copied as sequence owner, unless the
+ * user has opted out of that behaviour.
+ */
+ if (!run_as_owner)
+ SwitchToUntrustedUser(seqowner, &ucxt);

The code/comment seems contradictory because IMO it is vague what
runasowner member means -- which owner? AFAIK `run_as_owner` really
means "run as *subscription* owner". Ideally, the
MySubscription->runasowner member might be renamed to 'runassubowner'
but maybe that is a bigger change than you want to make for this
patch.

So, maybe just the comment can be rewritten for more clarity.

SUGGESTION:
If the user did not opt to run as the owner of the subscription
('run_as_owner'), then copy the sequence as the owner of the sequence.

(Also, make the similar comment change for the equivalent place in tablesync.c).

~~~

copy_sequences:

8.
+ ereport(LOG,
+ errmsg("logical replication sequence synchronization for
subscription \"%s\" - total unsynchronized: %d",
+    MySubscription->name, list_length(seqinfos)));
+
+ while (cur_batch_base_index < list_length(seqinfos))

Would it be tidier to declare int n_seqinfos = list_length(seqinfos);
instead of using list_length() multiple times in this function.

~~~

9.
+ * We deliberately avoid acquiring a local lock on the sequence before
+ * querying the publisher to prevent potential distributed deadlocks
+ * in bi-directional replication setups. For instance, a concurrent
+ * ALTER SEQUENCE on one node might block this worker, while the
+ * worker's own local lock simultaneously blocks a similar operation
+ * on the other nod resulting in a circular wait that spans both nodes
+ * and remains undetected.

typo: /nod/node/

~~~

10.
+ int64 last_value;
+ bool is_called;
+ XLogRecPtr page_lsn;
+ CopySeqResult sync_status;
+ LogicalRepSequenceInfo *seqinfo;
+
+ CHECK_FOR_INTERRUPTS();
+
+ if (ConfigReloadPending)
+ {
+ ConfigReloadPending = false;
+ ProcessConfigFile(PGC_SIGHUP);
+ }
+
+ sync_status = get_and_validate_seq_info(slot, &sequence_rel,
+ &seqinfo, &page_lsn,
+ &last_value, &is_called);
+ if (sync_status == COPYSEQ_SUCCESS)
+ sync_status = copy_sequence(seqinfo, last_value, is_called,
+ page_lsn,
+ sequence_rel->rd_rel->relowner);

Why does LogicalRepSequenceInfo only have to be metadata? Can't those
`page_lsn`, `last_value`, and `is_called` also be made members of
LogicalRepSequenceInfo, so you don't have to declare them and pass
them all in and out as parameters here?

======
src/test/subscription/t/036_sequences.pl

11.
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ DROP SEQUENCE regress_s4;
+));
+
+# Verify that an error is logged for the missing sequence ('regress_s4').
+$node_subscriber->wait_for_log(
+ qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence
synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.*
Missing sequence\(s\) on publisher: \("public.regress_s4"\)/,
+ $log_offset);

I felt that psql DROP code belongs below the comment too.
Alternatively, add another comment for that DROP, like:
# Drop the sequence ('regress_s4') in preparation for the next test

~~~

12.
There is still a yet-to-be-implemented test combination as previously
reported [1, comment #3], right?

======
[1]: /messages/by-id/CAHut+Pu+wTfWwCUUUL+cqsHFZ-ptY6CWe8FYM3by901NvLArCQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#458Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#457)
Re: Logical Replication of sequences

On Fri, Oct 31, 2025 at 7:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

======
.../replication/logical/sequencesync.c

2.
+ * A single sequencesync worker is responsible for synchronizing all sequences
+ * in INIT state in pg_subscription_rel. It begins by retrieving the list of
+ * sequences flagged for synchronization. These sequences are then processed
+ * in batches, allowing multiple entries to be synchronized within a single
+ * transaction. The worker fetches the current sequence values and page LSNs
+ * from the remote publisher, updates the corresponding sequences on the local
+ * subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
+ *

Those first 2 sentences seem repetitive because AFAIK "in INIT state"
and "flagged for synchronization" are exactly the same thing.

SUGGESTION
A single sequencesync worker is responsible for synchronizing all
sequences. It begins by retrieving the list of sequences that are
flagged for needing synchronization (e.g. those with INIT state).

/e.g/i.e. We don't have multiple such states, so let's be specific.

~~~

4.
Now that this function is in 2 parts, I think each part should be
clearly identified with comments, something like:

PART 1:
/*
* 1. Extract sequence information from the tuple slot received from the
* publisher
*/

PART 2:
/*
* 2. Compare the remote sequence definition to the local sequence definition,
* and report any discrepancies.
*/

I don't see the need for such explicit comments as the same is
apparent from the code.

~~~

5.
+ seqinfo_local = (LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+ seqinfo_local->found_on_pub = true;
+ *seqinfo = seqinfo_local;

Is this separate `seqinfo_local` variable needed? It seems always
unconditionally assigned to the parameter, so you might as well just
do without the extra variable. Maybe just rename the parameter as
`seqinfo_local`?

We could do without a local variable as well, but it looks neater to
modify and use a local variable. I think it is a matter of personal
choice, so either way is fine, but I would prefer using a local variable
for this.

12.
There is still a yet-to-be-implemented test combination as previously
reported [1, comment #3], right?

I don't think adding similar negative tests adds value. So, we can skip those.

--
With Regards,
Amit Kapila.

#459Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#456)
Re: Logical Replication of sequences

Hi Vignesh,

For later.... here are some review comments for the documentation
patch v20251030-0002

======
doc/src/sgml/config.sgml

wal_retrieve_retry_interval:

1.
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>

I think you can simplify that.

SUGGESTION
In logical replication, this parameter also limits how quickly a
failing replication apply worker, or table/sequence synchronization
worker will be respawned.

~~~

max_logical_replication_workers:

2.
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>

I think you can simplify that.

SUGGESTION
This includes leader apply workers, parallel apply workers, and
table/sequence synchronization workers.

~~~

max_sync_workers_per_subscription:

3.
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>

But, there is no parallelism at all for sequence copies, because there
is only one sequencesync worker (as the following docs paragraph
says), so maybe we do not need this docs change.

NOTE -- see the comment #12 below, and maybe use wording like that.

======
doc/src/sgml/logical-replication.sgml

Section 29.1 Publication:

4.
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the state of
+   sequences can be synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.

Not sure about the wording "the state of". Maybe it can be simplified?

SUGGESTION
Unlike tables, sequences can be synchronized at any time.

~~~

5.
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>

AFAIK it's not going to get any newly added sequences, so it is not
really "all sequences", which seems misleading. I thought it should
be like below.

SUGGESTION
use ALTER SUBSCRIPTION ... REFRESH SEQUENCES to re-synchronize all
sequences currently known to the subscription.

~~~

Section 29.7.1. Sequence Definition Mismatches:

6.
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An error is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to retry until
+    all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>

It seems a bit misleading. e.g. AFAIK the claim "The apply worker detects
this failure" is not true. IIUC, the apply worker simply finds some
sequences that still have INIT state, so really it has no knowledge of
any failure at all, right?

Consider rewording this part.

SUGGESTION
The sequence synchronization worker validates that sequence
definitions match between publisher and subscriber. If mismatches
exist, the worker logs an error identifying them and exits. The apply
worker continues respawning the sequence synchronization worker until
synchronization succeeds.
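
Incidentally, to see which subscribed sequences are still pending (i.e.
still being retried), a query along these lines could be run on the
subscriber. This is only a sketch; it assumes the patch tracks sequences
in pg_subscription_rel using the INIT ('i') and READY ('r') state codes
described in the 0001 commit message.

-- Sketch: list subscribed sequences that are not yet READY.
SELECT s.subname, c.relname AS sequence_name, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
JOIN pg_subscription s ON s.oid = sr.srsubid
WHERE c.relkind = 'S'         -- sequences only
  AND sr.srsubstate <> 'r';   -- 'r' = READY is an assumed state code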

~~~

Section 29.7.2. Refreshing Stale Sequences:

7.
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>

I didn't see why the wording "To verify" was needed here. Below is a
slightly simpler alternative for these paragraphs.

SUGGESTION
Subscriber sequence values drift out of sync as the publisher advances
them. Compare values between publisher and subscriber, then run ALTER
SUBSCRIPTION ... REFRESH SEQUENCES to resynchronize if necessary.
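
For illustration only (not proposed wording for the docs), that comparison
could look like this, assuming a hypothetical subscription named sub1;
pg_sequences reports the current last_value on each node:

/* pub # */ SELECT schemaname, sequencename, last_value FROM pg_sequences;
/* sub # */ SELECT schemaname, sequencename, last_value FROM pg_sequences;
/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;  -- only if the values differ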

~~~

Section 29.7.3. Examples.

8. GENERAL. Prompts in examples

I think using prompts like "test_pub#" in the examples is frowned upon
because it makes cutting directly from the examples more difficult.
Similarly, the result of the commands is not shown.

See other PG18 logical replication examples to see what the current style
is, e.g. more like this:
/* pub # */ CREATE TABLE t1(a int, b int, c text, PRIMARY KEY(a,c));
/* pub # */ CREATE TABLE t2(d int, e int, f int, PRIMARY KEY(d));

~~~

9.
+    Re-synchronize all the sequences on the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.

SUGGESTION
Re-synchronize all sequences known to the subscriber using...

~~~

Section 29.9. Restrictions #

10.
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.

IIUC the "ALTER SUBSCRIPTION ... REFRESH SEQUENCES" is only going to
resync the sequences that the subscriber already knew about. So, if
you really wanted to get the latest of everything won't the user need
to execute double-commands just in case there are some new sequences
at the publisher?

e.g.
First, ALTER SUBSCRIPTION REFRESH PUBLICATION
Then, ALTER SUBSCRIPTION REFRESH SEQUENCES
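
As a sketch (assuming a hypothetical subscription named sub1), that would
be:

/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;  -- pick up newly published sequences
/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;    -- re-synchronize the known sequences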

~~~

Section 29.13.2. Subscribers #

11.
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
synchronization workers, and a sequence
+    synchronization worker.

I think this can be worded similar to the config.sgml

SUGGESTION
... plus some reserve for the parallel apply workers, and
table/sequence synchronization workers.

~~

12.
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>

Oh, perhaps this is the wording that should have been used in
config.sgml (review comment #3) to avoid implying that sequencesync
workers help with parallelism.

======
doc/src/sgml/monitoring.sgml

13.
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>

This docs fragment probably belongs with the pg_stats patch, not here.

======
doc/src/sgml/ref/create_subscription.sgml

14.
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>

In many places for this page the patch says "This parameter is not
applicable for sequences."

IMO that is ambiguous. It is not clear if the parameter is silently
ignored, if it will give an error, or what?

Maybe you can clarify by saying "is ignored" or "has no effect",
instead of "is not applicable"

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#460vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#457)
3 attachment(s)
Re: Logical Replication of sequences

On Fri, 31 Oct 2025 at 07:34, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

Some review comments for v20251030-0001

======
Commit message

1.
3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
- (A new command introduced in PG19 by a prior patch.)
- All sequences in pg_subscription_rel are reset to DATASYNC state.
~

DATASYNC? Just above, it says the only possible states for sequences
are INIT and READY.

Modified

======
.../replication/logical/sequencesync.c

2.
+ * A single sequencesync worker is responsible for synchronizing all sequences
+ * in INIT state in pg_subscription_rel. It begins by retrieving the list of
+ * sequences flagged for synchronization. These sequences are then processed
+ * in batches, allowing multiple entries to be synchronized within a single
+ * transaction. The worker fetches the current sequence values and page LSNs
+ * from the remote publisher, updates the corresponding sequences on the local
+ * subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
+ *

Those first 2 sentences seem repetitive because AFAIK "in INIT state"
and "flagged for synchronization" are exactly the same thing.

SUGGESTION
A single sequencesync worker is responsible for synchronizing all
sequences. It begins by retrieving the list of sequences that are
flagged for needing synchronization (e.g. those with INIT state).

Modified with slight changes

~~~

get_and_validate_seq_info:

3.
+/*
+ * get_and_validate_seq_info
+ *
+ * Extracts remote sequence information from the tuple slot received from the
+ * publisher and validates it against the corresponding local sequence
+ * definition.
+ */

Missing comma.

/publisher and validates/publisher, and validates/

Modified

~~~

4.
Now that this function is in 2 parts, I think each part should be
clearly identified with comments, something like:

PART 1:
/*
* 1. Extract sequence information from the tuple slot received from the
* publisher
*/

PART 2:
/*
* 2. Compare the remote sequence definition to the local sequence definition,
* and report any discrepancies.
*/

I felt this is not required

~~~

5.
+ seqinfo_local = (LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+ seqinfo_local->found_on_pub = true;
+ *seqinfo = seqinfo_local;

Is this separate `seqinfo_local` variable needed? It seems always
unconditionally assigned to the parameter, so you might as well just
do without the extra variable. Maybe just rename the parameter as
`seqinfo_local`?

I preferred keeping the existing code as it is more readable.

~~~

6.
+ if (!*sequence_rel) /* Sequence was concurrently dropped? */
+ return COPYSEQ_SKIPPED;
+
+ tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo_local->localrelid));
+ if (!HeapTupleIsValid(tup)) /* Sequence was concurrently dropped? */
+ return COPYSEQ_SKIPPED;

Nit. IMO the code was easier to read in the previous patch version,
when it was commented above the code like:

/* Sequence was concurrently dropped */
if (!*sequence_rel)
return COPYSEQ_SKIPPED;

/* Sequence was concurrently dropped */
tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo->localrelid));
if (!HeapTupleIsValid(tup))
return COPYSEQ_SKIPPED;

Updated

~~~

copy_sequence:

7.
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, int64 last_value,
+   bool is_called, XLogRecPtr page_lsn, Oid seqowner)
+{
+ UserContext ucxt;
+ AclResult aclresult;
+ bool run_as_owner = MySubscription->runasowner;
+ Oid seqoid = seqinfo->localrelid;
+
+ /*
+ * Make sure that the sequence is copied as sequence owner, unless the
+ * user has opted out of that behaviour.
+ */
+ if (!run_as_owner)
+ SwitchToUntrustedUser(seqowner, &ucxt);

The code/comment seems contradictory because IMO it is vague what
runasowner member means -- which owner? AFAIK `run_as_owner` really
means "run as *subscription* owner". Ideally, the
MySubscription->runasowner member might be renamed to 'runassubowner'
but maybe that is a bigger change than you want to make for this
patch.

So, maybe just the comment can be rewritten for more clarity.

SUGGESTION:
If the user did not opt to run as the owner of the subscription
('run_as_owner'), then copy the sequence as the owner of the sequence.

(Also, make the similar comment change for the equivalent place in tablesync.c).

Updated

~~~

copy_sequences:

8.
+ ereport(LOG,
+ errmsg("logical replication sequence synchronization for
subscription \"%s\" - total unsynchronized: %d",
+    MySubscription->name, list_length(seqinfos)));
+
+ while (cur_batch_base_index < list_length(seqinfos))

Would it be tidier to declare int n_seqinfos = list_length(seqinfos);
instead of using list_length() multiple times in this function.

Modified

~~~

9.
+ * We deliberately avoid acquiring a local lock on the sequence before
+ * querying the publisher to prevent potential distributed deadlocks
+ * in bi-directional replication setups. For instance, a concurrent
+ * ALTER SEQUENCE on one node might block this worker, while the
+ * worker's own local lock simultaneously blocks a similar operation
+ * on the other nod resulting in a circular wait that spans both nodes
+ * and remains undetected.

typo: /nod/node/

Modified

~~~

10.
+ int64 last_value;
+ bool is_called;
+ XLogRecPtr page_lsn;
+ CopySeqResult sync_status;
+ LogicalRepSequenceInfo *seqinfo;
+
+ CHECK_FOR_INTERRUPTS();
+
+ if (ConfigReloadPending)
+ {
+ ConfigReloadPending = false;
+ ProcessConfigFile(PGC_SIGHUP);
+ }
+
+ sync_status = get_and_validate_seq_info(slot, &sequence_rel,
+ &seqinfo, &page_lsn,
+ &last_value, &is_called);
+ if (sync_status == COPYSEQ_SUCCESS)
+ sync_status = copy_sequence(seqinfo, last_value, is_called,
+ page_lsn,
+ sequence_rel->rd_rel->relowner);

Why does LogicalRepSequenceInfo only have to be metadata? Can't those
`page_lsn`, `last_value`, and `is_called` also be made members of
LogicalRepSequenceInfo, so you don't have to declare them and pass
them all in and out as parameters here?

Modified

======
src/test/subscription/t/036_sequences.pl

11.
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ DROP SEQUENCE regress_s4;
+));
+
+# Verify that an error is logged for the missing sequence ('regress_s4').
+$node_subscriber->wait_for_log(
+ qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence
synchronization failed for subscription "regress_seq_sub"\n.*DETAIL:.*
Missing sequence\(s\) on publisher: \("public.regress_s4"\)/,
+ $log_offset);

I felt that psql DROP code belongs below the comment too.
Alternatively, add another comment for that DROP, like:
# Drop the sequence ('regress_s4') in preparation for the next test

Modified

~~~

12.
There is still a yet-to-be-implemented test combination as previously
reported [1, comment #3], right?

I'm skipping this to keep the tests minimal.

Apart from these, the following improvements were done:
a) The error message has been split into three warning messages (one for
missing sequences, one for mismatched sequences, and one for sequences
with insufficient privileges), followed by a single error message, as the
original message was too long and it was difficult to map each of the
hints.
b) errmsg_plural and errhint_plural are now used so that the messages say
"sequence" or "sequences" depending on the sequence count.
c) Changed the batch log message to DEBUG1 level.
d) A few of the comments were improved to add more clarity.

The attached v20251101 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20251101-0003-Add-seq_sync_error_count-to-subscription-s.patch
From f199a4c740da2a572dd281e9e99b21d3419e3558 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 30 Oct 2025 21:01:47 +0530
Subject: [PATCH v20251101 3/3] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
---
 doc/src/sgml/monitoring.sgml                  |  9 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 17 ++--
 .../utils/activity/pgstat_subscription.c      | 27 +++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 ++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 80 ++++++++++++-------
 11 files changed, 122 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index e3523ac882d..1dc0024ab92 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2193,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dec8df4f8ee..059e8778ca7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1415,6 +1415,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index c205a9725cc..86cdcbf0e49 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -709,6 +709,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e5a2856fd17..dcc6124cc73 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8026a007ec3..cb8a5b0f70c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5607,7 +5607,7 @@ start_apply(XLogRecPtr origin_startpos)
 			 */
 			AbortOutOfAnyTransaction();
 			pgstat_report_subscription_error(MySubscription->oid,
-											 !am_tablesync_worker());
+											 MyLogicalRepWorker->type);
 
 			PG_RE_THROW();
 		}
@@ -5954,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		 * Report the worker failed during either table synchronization or
-		 * apply.
-		 */
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										 !am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a710508979e..1521d6e2ab4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2203,7 +2203,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2221,25 +2221,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2256,6 +2258,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34b7fddb0e7..5cf9e12fcb9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 7ae503e71a2..a0610bb3e31 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -769,7 +772,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 77e25ca029e..fe20f613c3a 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..23f3511f9a4 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and seq_sync_error_count > 0 and sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence start value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,14 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0

v20251101-0001-New-worker-for-sequence-synchronization-du.patch
From 58bc4861d499e461fa1725f1b7fd60290eaf4adc Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 15:31:13 +0530
Subject: [PATCH v20251101] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences with pg_sequence_state() INIT.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all done; Commits synchronized sequences in batches of 100

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - (A new command introduced in PG19 by a prior patch.)
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/commands/sequence.c               |  18 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  56 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 729 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 138 +++-
 src/backend/replication/logical/tablesync.c   |  77 +-
 src/backend/replication/logical/worker.c      |  69 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   2 +-
 src/include/catalog/pg_subscription_rel.h     |  14 +
 src/include/commands/sequence.h               |   1 +
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  22 +-
 src/test/subscription/t/036_sequences.pl      | 175 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 18 files changed, 1201 insertions(+), 116 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c23dee5231c..8d671b7a29d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,8 +953,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1056,7 +1055,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1064,14 +1063,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1080,7 +1079,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1797,8 +1796,9 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 /*
  * Return the sequence tuple along with its page LSN.
  *
- * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * This is primarily used by pg_dump to efficiently collect sequence data
+ * without querying each sequence individually, and is also leveraged by
+ * logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 95b5cae9a55..2a1d4e03fe2 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,9 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given worker type,
  * subscription id, and relation id.
  *
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables. For table sync workers, the relid should be set
- * to the OID of the relation being synchronized.
+ * For apply workers and sequencesync workers, the relid should be set to
+ * InvalidOid, as they are not tied to a single relation. For tablesync
+ * workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
@@ -334,6 +335,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -422,7 +424,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -511,8 +514,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -848,6 +859,34 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset last_seqsync_start_time (the time at which a sequencesync worker was
+ * last started) in the subscription's apply worker.
+ *
+ * Note that this value is not stored in the sequencesync worker, because that
+ * has finished already and is about to exit.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	/*
+	 * Acquire LogicalRepWorkerLock in LW_EXCLUSIVE mode to block the apply
+	 * worker (holding LW_SHARED) from reading or updating
+	 * last_seqsync_start_time. See ProcessSyncingSequencesForApply().
+	 */
+	LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);
+
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -896,7 +935,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1610,7 +1649,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1650,6 +1689,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..4bf70abcbaf
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,729 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a sequencesync worker
+ * to handle synchronization.
+ *
+ * A single sequencesync worker is responsible for synchronizing all sequences.
+ * It begins by retrieving the list of sequences that are flagged for
+ * synchronization, i.e., those in the INIT state. These sequences are then
+ * processed in batches, allowing multiple entries to be synchronized within a
+ * single transaction. The worker fetches the current sequence values and page
+ * LSNs from the remote publisher, updates the corresponding sequences on the
+ * local subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: We didn't choose the launcher process to start the sequencesync worker
+ * because it has no database connection, which is needed to read the sequences
+ * to be synchronized from the pg_subscription_rel system catalog.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/pg_lsn.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 10
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static List *seqinfos = NIL;
+
+/*
+ * Apply worker determines if sequence synchronization is needed.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(NULL, &has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(WORKERTYPE_SEQUENCESYNC,
+												 MyLogicalRepWorker->subid,
+												 InvalidOid, true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(WORKERTYPE_SEQUENCESYNC, nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * get_sequences_string
+ *
+ * Build a comma-separated string of schema-qualified sequence names
+ * for the given list of sequence indexes.
+ */
+static void
+get_sequences_string(StringInfo buf, List *seqindexes)
+{
+	resetStringInfo(buf);
+	foreach_int(seqidx, seqindexes)
+	{
+		LogicalRepSequenceInfo *seqinfo =
+			(LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+
+		if (buf->len > 0)
+			appendStringInfoString(buf, ", ");
+
+		appendStringInfo(buf, "\"%s.%s\"", seqinfo->nspname, seqinfo->seqname);
+	}
+}
+
+/*
+ * report_sequence_errors
+ *
+ * Report discrepancies found during sequence synchronization between
+ * the publisher and subscriber. Emits warnings for:
+ * a) insufficient privileges
+ * b) mismatched definitions or concurrent rename
+ * c) missing sequences on the subscriber
+ * Then raises an ERROR to indicate synchronization failure.
+ */
+static void
+report_sequence_errors(List *insuffperm_seqs, List *mismatched_seqs,
+					   List *missing_seqs)
+{
+	StringInfo	seqstr = makeStringInfo();
+
+	if (insuffperm_seqs)
+	{
+		get_sequences_string(seqstr, insuffperm_seqs);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("insufficient privileges on sequence (%s)",
+							  "insufficient privileges on sequences (%s)",
+							  list_length(insuffperm_seqs),
+							  seqstr->data),
+				errhint_plural("Grant UPDATE privilege on the sequence.",
+							   "Grant UPDATE privilege on the sequences.",
+							   list_length(insuffperm_seqs)));
+	}
+
+	if (mismatched_seqs)
+	{
+		get_sequences_string(seqstr, mismatched_seqs);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("mismatched or renamed sequence on subscriber (%s)",
+							  "mismatched or renamed sequences on subscriber (%s)",
+							  list_length(mismatched_seqs),
+							  seqstr->data),
+				errhint_plural("Alter or re-create the local sequence to match the publisher's definition.",
+							   "Alter or re-create the local sequences to match the publisher's definition.",
+							   list_length(mismatched_seqs)));
+	}
+
+	if (missing_seqs)
+	{
+		get_sequences_string(seqstr, missing_seqs);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("sequence missing on publisher (%s)",
+							  "sequences missing on publisher (%s)",
+							  list_length(missing_seqs),
+							  seqstr->data),
+				errhint_plural("Remove the missing sequence from the local node or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.",
+							   "Remove the missing sequences from the local node or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.",
+							   list_length(missing_seqs)));
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"",
+				   MySubscription->name));
+}
+
+/*
+ * get_and_validate_seq_info
+ *
+ * Extracts remote sequence information from the tuple slot received from the
+ * publisher, and validates it against the corresponding local sequence
+ * definition.
+ */
+static CopySeqResult
+get_and_validate_seq_info(TupleTableSlot *slot, Relation *sequence_rel,
+						  LogicalRepSequenceInfo **seqinfo, int *seqidx)
+{
+	bool		isnull;
+	int			col = 0;
+	Oid			remote_typid;
+	int64		remote_start;
+	int64		remote_increment;
+	int64		remote_min;
+	int64		remote_max;
+	bool		remote_cycle;
+	CopySeqResult result = COPYSEQ_SUCCESS;
+	HeapTuple	tup;
+	Form_pg_sequence local_seq;
+	LogicalRepSequenceInfo *seqinfo_local;
+
+	*seqidx = DatumGetInt32(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Identify the corresponding local sequence for the given index. */
+	*seqinfo = seqinfo_local =
+		(LogicalRepSequenceInfo *) list_nth(seqinfos, *seqidx);
+
+	seqinfo_local->last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqinfo_local->is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqinfo_local->page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	seqinfo_local->found_on_pub = true;
+
+	*sequence_rel = try_table_open(seqinfo_local->localrelid, RowExclusiveLock);
+	/* Sequence was concurrently dropped? */
+	if (!*sequence_rel)
+		return COPYSEQ_SKIPPED;
+
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo_local->localrelid));
+	/* This should not happen, since we hold a lock on the sequence. */
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence %u",
+			 seqinfo_local->localrelid);
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Sequence parameters for remote/local are the same? */
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	/* Sequence was concurrently renamed? */
+	if (strcmp(seqinfo_local->nspname,
+			   get_namespace_name(RelationGetNamespace(*sequence_rel))) ||
+		strcmp(seqinfo_local->seqname, RelationGetRelationName(*sequence_rel)))
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as
+ * synchronized (READY).
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, Oid seqowner)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+	Oid			seqoid = seqinfo->localrelid;
+
+	/*
+	 * If the user did not opt to run as the owner of the subscription
+	 * ('run_as_owner'), then copy the sequence as the owner of the sequence.
+	 */
+	if (!run_as_owner)
+		SwitchToUntrustedUser(seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqoid, GetUserId(), ACL_UPDATE);
+
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqoid, seqinfo->last_value, seqinfo->is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqoid, SUBREL_STATE_READY,
+							   seqinfo->page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn)
+{
+	int			cur_batch_base_index = 0;
+	int			n_seqinfos = list_length(seqinfos);
+	List	   *mismatched_seqs = NIL;
+	List	   *missing_seqs = NIL;
+	List	   *insuffperm_seqs = NIL;
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	MemoryContext oldctx;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, n_seqinfos));
+
+	while (cur_batch_base_index < n_seqinfos)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+		int			batch_missing_count;
+		Relation	sequence_rel;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		for (int idx = cur_batch_base_index; idx < n_seqinfos; idx++)
+		{
+			LogicalRepSequenceInfo *seqinfo =
+				(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
+							 seqinfo->nspname, seqinfo->seqname, idx);
+
+			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		/*
+		 * We deliberately avoid acquiring a local lock on the sequence before
+		 * querying the publisher to prevent potential distributed deadlocks
+		 * in bi-directional replication setups.
+		 *
+		 * Example scenario:
+		 *
+		 * - On each node, a background worker acquires a lock on a sequence
+		 * as part of a sync operation.
+		 *
+		 * - Concurrently, a user transaction attempts to alter the same
+		 * sequence, waiting on the background worker's lock.
+		 *
+		 * - Meanwhile, a query from the other node tries to access metadata
+		 * that depends on the completion of the alter operation.
+		 *
+		 * - This creates a circular wait across nodes:
+		 *
+		 * Node-1: Query -> waits on Alter -> waits on Sync Worker
+		 *
+		 * Node-2: Query -> waits on Alter -> waits on Sync Worker
+		 *
+		 * Since each node only sees part of the wait graph, the deadlock may
+		 * go undetected, leading to indefinite blocking.
+		 *
+		 * Note: Each entry in VALUES includes an index 'seqidx' that
+		 * represents the sequence's position in the local 'seqinfos' list.
+		 * This index is propagated to the query results and later used to
+		 * directly map the fetched publisher sequence rows back to their
+		 * corresponding local entries without relying on result order or name
+		 * matching.
+		 */
+		appendStringInfo(cmd,
+						 "SELECT s.seqidx, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname, seqidx)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not fetch sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			CopySeqResult sync_status;
+			LogicalRepSequenceInfo *seqinfo;
+			int			seqidx;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			sync_status = get_and_validate_seq_info(slot, &sequence_rel,
+													&seqinfo, &seqidx);
+			if (sync_status == COPYSEQ_SUCCESS)
+				sync_status = copy_sequence(seqinfo,
+											sequence_rel->rd_rel->relowner);
+
+			switch (sync_status)
+			{
+				case COPYSEQ_MISMATCH:
+
+					/*
+					 * Allocate in a long-lived memory context, since these
+					 * errors will be reported after the transaction commits.
+					 */
+					oldctx = MemoryContextSwitchTo(TopMemoryContext);
+					mismatched_seqs = lappend_int(mismatched_seqs, seqidx);
+					MemoryContextSwitchTo(oldctx);
+					batch_mismatched_count++;
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+
+					/*
+					 * Allocate in a long-lived memory context, since these
+					 * errors will be reported after the transaction commits.
+					 */
+					oldctx = MemoryContextSwitchTo(TopMemoryContext);
+					insuffperm_seqs = lappend_int(insuffperm_seqs, seqidx);
+					MemoryContextSwitchTo(oldctx);
+					batch_insuffperm_count++;
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+								   seqinfo->nspname,
+								   seqinfo->seqname));
+					batch_skipped_count++;
+					break;
+				case COPYSEQ_SUCCESS:
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+					batch_succeeded_count++;
+					break;
+			}
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		batch_missing_count = batch_size - (batch_succeeded_count +
+											batch_mismatched_count +
+											batch_insuffperm_count +
+											batch_skipped_count);
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+								MySubscription->name,
+								(cur_batch_base_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1,
+								batch_size, batch_succeeded_count, batch_skipped_count,
+								batch_mismatched_count, batch_insuffperm_count,
+								batch_missing_count));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		if (batch_missing_count)
+		{
+			for (int idx = cur_batch_base_index; idx < cur_batch_base_index + batch_size; idx++)
+			{
+				LogicalRepSequenceInfo *seqinfo =
+					(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+				/* If the sequence was not found on publisher, record it */
+				if (!seqinfo->found_on_pub)
+					missing_seqs = lappend_int(missing_seqs, idx);
+			}
+		}
+
+		/*
+		 * cur_batch_base_index is not incremented sequentially because some
+		 * sequences may be missing, and the number of fetched rows may not
+		 * match the batch size.
+		 */
+		cur_batch_base_index += batch_size;
+	}
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs || mismatched_seqs || missing_seqs)
+		report_sequence_errors(insuffperm_seqs, mismatched_seqs, missing_seqs);
+}
+
+/*
+ * Determines which sequences require synchronization and initiates their
+ * synchronization process.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		LogicalRepSequenceInfo *seq;
+		Relation	sequence_rel;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+
+		/* Skip if sequence was dropped concurrently */
+		if (!sequence_rel)
+			continue;
+
+		/* Skip if the relation is not a sequence; release it and move on */
+		if (sequence_rel->rd_rel->relkind != RELKIND_SEQUENCE)
+		{
+			table_close(sequence_rel, RowExclusiveLock);
+			continue;
+		}
+
+		/*
+		 * Worker needs to process sequences across transaction boundary, so
+		 * allocate them under long-lived context.
+		 */
+		oldctx = MemoryContextSwitchTo(TopMemoryContext);
+
+		seq = palloc0_object(LogicalRepSequenceInfo);
+		seq->localrelid = subrel->srrelid;
+		seq->nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+		seq->seqname = pstrdup(RelationGetRelationName(sequence_rel));
+		seqinfos = lappend(seqinfos, seq);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/*
+	 * Exit early if no catalog entries found, likely due to concurrent drops.
+	 */
+	if (!seqinfos)
+		return;
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors which are probably because of system
+ * resource error and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ae8c9385916..0733de93f5d 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -48,6 +49,8 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
 pg_noreturn void
 FinishSyncWorker(void)
 {
+	Assert(am_sequencesync_worker() || am_tablesync_worker());
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,15 +65,33 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (am_sequencesync_worker())
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
+	else
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
-							 InvalidOid);
+	if (am_sequencesync_worker())
+	{
+		/*
+		 * Find the leader apply worker and reset last_seqsync_start_time. This
+		 * ensures that the apply worker can restart the sequence sync worker
+		 * promptly whenever required.
+		 */
+		logicalrep_reset_seqsync_start_time();
+	}
+	else
+	{
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+								 InvalidOid);
+	}
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -86,7 +107,52 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * wtype: sync worker type.
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid:  InvalidOid for sequencesync worker, actual relid for tablesync
+ * worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers, Oid relid,
+				   TimestampTz *last_start_time)
+{
+	TimestampTz now;
+
+	Assert((wtype == WORKERTYPE_TABLESYNC && OidIsValid(relid)) ||
+		   (wtype == WORKERTYPE_SEQUENCESYNC && !OidIsValid(relid)));
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers >= max_sync_workers_per_subscription)
+		return;
+
+	now = GetCurrentTimestamp();
+
+	if (!(*last_start_time) ||
+		TimestampDifferenceExceeds(*last_start_time, now,
+								   wal_retrieve_retry_interval))
+	{
+		/*
+		 * Set the last_start_time even if we fail to start the worker, so
+		 * that we won't retry until wal_retrieve_retry_interval has elapsed.
+		 */
+		*last_start_time = now;
+		(void) logicalrep_worker_launch(wtype,
+										MyLogicalRepWorker->dbid,
+										MySubscription->oid,
+										MySubscription->name,
+										MyLogicalRepWorker->userid,
+										relid, DSM_HANDLE_INVALID, false);
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -108,6 +174,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -117,17 +189,29 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-READY tables and non-READY sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * has_pending_subtables: true if the subscription has one or more tables that
+ * are not in READY state, otherwise false.
+ * has_pending_subsequences: true if the subscription has one or more sequences
+ * that are not in READY state, otherwise false.
  */
-bool
-FetchRelationStates(bool *started_tx)
+void
+FetchRelationStates(bool *has_pending_subtables,
+					bool *has_pending_subsequences,
+					bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared as static,
+	 * since the same value can be used until the system table is invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -135,10 +219,10 @@ FetchRelationStates(bool *started_tx)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
-		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,17 +234,23 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-READY state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
+		foreach_ptr(SubscriptionRelState, subrel, rstates)
 		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+			if (get_rel_relkind(subrel->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+			{
+				rstate = palloc(sizeof(SubscriptionRelState));
+				memcpy(rstate, subrel, sizeof(SubscriptionRelState));
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
+			}
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -185,5 +275,9 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
-	return has_subtables;
+	if (has_pending_subtables)
+		*has_pending_subtables = has_subtables;
+
+	if (has_pending_subsequences)
+		*has_pending_subsequences = has_subsequences_non_ready;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 58c98488d7b..e5a2856fd17 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -374,14 +374,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	};
 	static HTAB *last_start_times = NULL;
 	ListCell   *lc;
-	bool		started_tx = false;
+	bool		started_tx;
 	bool		should_exit = false;
 	Relation	rel = NULL;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -415,6 +415,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -428,11 +436,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -552,43 +555,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(WORKERTYPE_TABLESYNC, nsyncworkers,
+								   rstate->relid, &hentry->last_start_time);
 			}
 		}
 	}
@@ -1432,8 +1411,8 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 	}
 
 	/*
-	 * Make sure that the copy command runs as the table owner, unless the
-	 * user has opted out of that behaviour.
+	 * If the user did not opt to run as the owner of the subscription
+	 * ('run_as_owner'), then copy the table as the owner of the table.
 	 */
 	run_as_owner = MySubscription->runasowner;
 	if (!run_as_owner)
@@ -1596,7 +1575,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1618,11 +1597,11 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		started_tx;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1634,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1649,10 +1628,10 @@ bool
 HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
-	bool		has_subrels;
+	bool		has_tables;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1660,7 +1639,7 @@ HasSubscriptionTablesCached(void)
 		pgstat_report_stat(true);
 	}
 
-	return has_subrels;
+	return has_tables;
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7edd1c9cf06..8026a007ec3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2465,7 +2485,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4137,7 +4160,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5580,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 !am_tablesync_worker());
 
 			PG_RE_THROW();
 		}
@@ -5700,8 +5727,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5812,6 +5839,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5831,14 +5862,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5921,9 +5954,15 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	if (am_leader_apply_worker() || am_tablesync_worker())
+	{
+		/*
+		 * Report the worker failed during either table synchronization or
+		 * apply.
+		 */
+		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+										 !am_tablesync_worker());
+	}
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9121a382f76..34b7fddb0e7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..5bd3fd5d475 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,20 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+/*
+ * Stores metadata about a sequence involved in logical replication.
+ */
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	XLogRecPtr  page_lsn;
+	int64		last_value;
+	bool		is_called;
+	bool		found_on_pub;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..46b4d89dd6e 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e23fa9a4514..32ef365f4a6 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -264,6 +267,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+							   Oid relid, TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
 								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -272,6 +277,7 @@ extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -282,11 +288,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern void FetchRelationStates(bool *has_pending_subtables,
+								bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -353,13 +361,21 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..fbd819dfc02 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,14 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +54,165 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence ('regress_s1') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION will not sync existing sequence');
+
+# Check - newly published sequence ('regress_s2') is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# Test: REFRESH SEQUENCES and REFRESH PUBLICATION (copy_data = off)
+#
+# 1. ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize all
+#    existing sequences, but not synchronize newly added ones.
+# 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+#    also not update sequence values for newly added sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# 1. Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences ('regress_s1' and 'regress_s2') are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence ('regress_s3') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+# 2. Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data as false
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence ('regress_s3') is not synced when
+# (copy_data = off).
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql('postgres',
+	"ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION");
+
+# Verify that a warning is logged for parameter differences on sequence
+# ('regress_s4').
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)\n.*HINT:.* Alter or re-create the local sequence to match the publisher's definition/,
+	$log_offset);
+
+# Verify that a warning is logged for the missing sequence ('regress_s4').
+$node_publisher->safe_psql('postgres', qq(DROP SEQUENCE regress_s4;));
+
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? sequence missing on publisher \("public.regress_s4"\)\n.*HINT:.* Remove missing sequence in the local node or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription/,
+	$log_offset);
 
 done_testing();
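
For context, the remediation that the "mismatched or renamed sequence" HINT above
points to is an ALTER SEQUENCE on the subscriber; a sketch using this test's
sequence definitions (before the publisher-side DROP):

    ALTER SEQUENCE regress_s4 START WITH 1 INCREMENT BY 2;

After that, the respawned sequencesync worker can retry and complete the copy.
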
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 018b5919cf6..2ca7b75af57 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1629,6 +1630,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20251101-0002-Documentation-for-sequence-synchronization.patch (application/octet-stream)
From d0a932af47b6af22d281a48e05ac6f6be3cec59d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251101 2/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  16 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 230 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 282 insertions(+), 29 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 06d1e4403b5..926590c53ae 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5358,10 +5358,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last sequence value set by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
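
For orientation, a minimal usage sketch of the function documented above, assuming
the signature shown ("s1" is a placeholder sequence name; the returned values
depend on how far the sequence has advanced):

    -- inspect a sequence's current state and the LSN of its last change
    SELECT last_value, is_called, page_lsn
      FROM pg_get_sequence_data('s1'::regclass);
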
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..86c778fc1f4 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the state of
+   sequences can be synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,200 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An error is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to retry until
+    all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+test_pub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_pub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+test_sub=# CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+CREATE SEQUENCE
+test_sub=# CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+CREATE SEQUENCE
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+test_pub=# CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+CREATE PUBLICATION
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+test_sub=# CREATE SUBSCRIPTION sub1
+test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+test_sub-# PUBLICATION pub1;
+CREATE SUBSCRIPTION
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+test_pub=# SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+test_pub=# SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all the sequences on the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+test_sub=# ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+ALTER SUBSCRIPTION
+
+test_sub=# SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+test_sub=# SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2286,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2622,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table synchronization workers, and a sequence
+    synchronization worker.
    </para>
 
    <para>
@@ -2437,8 +2636,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index f3bf527d5b4..e3523ac882d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
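
As an aside, a quick way to observe the new worker type while a synchronization is
in progress (a sketch; column names as documented for pg_stat_subscription):

    SELECT subname, pid, worker_type FROM pg_stat_subscription;
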
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..05bf2f2f49f 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter is not applicable for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter is not
+          applicable for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter is not applicable for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter is not applicable for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

#461vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#459)
3 attachment(s)
Re: Logical Replication of sequences

On Fri, 31 Oct 2025 at 11:26, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

For later.... here are some review comments for the documentation
patch v20251030-0002

======
doc/src/sgml/config.sgml

wal_retrieve_retry_interval:

1.
<para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, table synchronization worker, or
+        sequence synchronization worker will be respawned.
</para>

I think you can simplify that.

SUGGESTION
In logical replication, this parameter also limits how quickly a
failing replication apply worker, or table/sequence synchronization
worker will be respawned.

Modified

~~~

max_logical_replication_workers:

2.
<para>
Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, table synchronization
+        workers and a sequence synchronization worker.
</para>

I think you can simplify that.

SUGGESTION
This includes leader apply workers, parallel apply workers, and
table/sequence synchronization workers.

Modified

~~~

max_sync_workers_per_subscription:

3.
<para>
Maximum number of synchronization workers per subscription. This
parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        during the subscription initialization or when new tables or sequences
+        are added.
</para>

But, there is no parallelism at all for sequence copies, because there
is only one sequencesync worker (as the following docs paragraph
says), so maybe we do not need this docs change.

NOTE -- see the comment #12 below, and maybe use wording like that.

Modified similarly

======
doc/src/sgml/logical-replication.sgml

Section 29.1 Publication:

4.
Publications may currently only contain tables or sequences. Objects must be
added explicitly, except when a publication is created using
<literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, the state of
+   sequences can be synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.

Not sure about the wording "the state of". Maybe it can be simplified?

SUGGESTION
Unlike tables, sequences can be synchronized at any time.

Modified

~~~

5.
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences.
+     </para>
+    </listitem>

AFAIK it's not going to get any newly added sequences so it is not
really "all sequences" so this seems misleading. I thought it should
be like below.

SUGGESTION
use ALTER SUBSCRIPTION ... REFRESH SEQUENCES to re-synchronize all
sequences currently known to the subscription.

Modified

~~~

Section 29.7.1. Sequence Definition Mismatches:

6.
+   <para>
+    During sequence synchronization, the sequence definitions of the publisher
+    and the subscriber are compared. An error is logged listing all differing
+    sequences before the process exits. The apply worker detects this failure
+    and repeatedly respawns the sequence synchronization worker to retry until
+    all differences are resolved. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>

It seems a bit misleading. e.g. AFAIK the "The apply worker detects
this failure" is not true. IIUC, the apply worker simply finds some
sequences that still have INIT state, so really it has no knowledge of
failure at all, right?

Consider rewording this part.

SUGGESTION
The sequence synchronization worker validates that sequence
definitions match between publisher and subscriber. If mismatches
exist, the worker logs an error identifying them and exits. The apply
worker continues respawning the sequence synchronization worker until
synchronization succeeds.

Modified

~~~

Section 29.7.2. Refreshing Stale Sequences:

7.
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To verify, compare the sequence values between the publisher and
+    subscriber, and if necessary, execute
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+   </para>

I didn't see why the wording "To verify" was needed here. Below is a
slightly simpler alternative for these paragraphs.

SUGGESTION
Subscriber sequence values drift out of sync as the publisher advances
them. Compare values between publisher and subscriber, then run ALTER
SUBSCRIPTION ... REFRESH SEQUENCES to resynchronize if necessary.

Modified

~~~

Section 29.7.3. Examples.

8. GENERAL. Prompts in examples

I think using prompts like "test_pub#" in the examples is frowned upon
because it makes cutting directly from the examples more difficult.
Similarly, the result of the commands is not shown.

See other PG18 logical replication examples for why the current style
is.... e.g. more like this:
/* pub # */ CREATE TABLE t1(a int, b int, c text, PRIMARY KEY(a,c));
/* pub # */ CREATE TABLE t2(d int, e int, f int, PRIMARY KEY(d));

Modified

~~~

9.
+    Re-synchronize all the sequences on the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.

SUGGESTION
Re-synchronize all sequences known to the subscriber using...

Modified

~~~

Section 29.9. Restrictions #

10.
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.

IIUC the "ALTER SUBSCRIPTION ... REFRESH SEQUENCES" is only going to
resync the sequences that the subscriber already knew about. So, if
you really wanted to get the latest of everything won't the user need
to execute double-commands just in case there are some new sequences
at the publisher?

e.g.
First, ALTER SUBSCRIPTION REFRESH PUBLICATION
Then, ALTER SUBSCRIPTION REFRESH SEQUENCES

I felt that since we don't support DDL replication, newly created sequences
would need to be copied using pg_dump anyway. I felt the existing text is OK.

~~~

Section 29.13.2. Subscribers #

11.
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, table
synchronization workers, and a sequence
+    synchronization worker.

I think this can be worded similar to the config.sgml

SUGGESTION
... plus some reserve for the parallel apply workers, and
table/sequence synchronization workers.

Modified

~~

12.
<para>
<link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
</para>

Oh, perhaps this is the wording that should have been used in
config.sgml (review comment #3) to avoid implying about sequencesync
workers helping with parallelism.

Used wording similar to this for the sequence synchronization documentation in #3
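
For illustration, subscriber settings consistent with that wording might look like
the following sketch (values are arbitrary; per the patch, the single sequencesync
worker shares the max_sync_workers_per_subscription pool with the tablesync
workers, and changing max_logical_replication_workers requires a server restart):

    ALTER SYSTEM SET max_logical_replication_workers = 8;
    ALTER SYSTEM SET max_sync_workers_per_subscription = 4;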

======
doc/src/sgml/monitoring.sgml

13.
<para>
Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
</para></entry>

This docs fragment probably belongs with the pg_stats patch, not here.

This is OK here, as we display this for running processes; the other
stats patch is mainly for errors.

======
doc/src/sgml/ref/create_subscription.sgml

14.
(see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter is not applicable for sequences.
</para>

In many places for this page the patch says "This parameter is not
applicable for sequences."

IMO that is ambiguous. It is not clear if the parameter is silently
ignored, if it will give an error, or what?

Maybe you can clarify by saying "is ignored" or "has no effect",
instead of "is not applicable"

Modified

The attached v20251102 version of the patch includes these changes.

Regards,
Vignesh

Attachments:

v20251102-0001-New-worker-for-sequence-synchronization-du.patch (text/x-patch)
From 9f15a8ac07960e6484def932e25b32f36171ffd6 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 15:31:13 +0530
Subject: [PATCH v20251102 1/3] New worker for sequence synchronization during
 subscription management

This patch introduces sequence synchronization:
Sequences have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
It does the following:
    a) Retrieves remote values of sequences with pg_sequence_state() INIT.
    b) Logs a warning if the sequence parameters differ between the publisher and subscriber.
    c) Sets the local sequence values accordingly.
    d) Updates the local sequence state to READY.
    e) Repeats until all done; Commits synchronized sequences in batches of 100

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - (The command syntax remains unchanged from PG18 to PG19.)
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT state.
    - Initiate the sequencesync worker (see above) to synchronize only
      newly added sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - (A new command introduced in PG19 by a prior patch.)
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker (see above) to synchronize all
      sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/commands/sequence.c               |  18 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  56 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 729 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 138 +++-
 src/backend/replication/logical/tablesync.c   |  77 +-
 src/backend/replication/logical/worker.c      |  69 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   2 +-
 src/include/catalog/pg_subscription_rel.h     |  14 +
 src/include/commands/sequence.h               |   1 +
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  22 +-
 src/test/subscription/t/036_sequences.pl      | 175 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 18 files changed, 1201 insertions(+), 116 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c
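
For orientation, the INIT/READY bookkeeping described above lives in
pg_subscription_rel; a sketch for listing the sequences that still await
synchronization on the subscriber ('i' = INIT, 'r' = READY):

    SELECT c.relname, sr.srsubstate
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid
     WHERE c.relkind = 'S' AND sr.srsubstate <> 'r';
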

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c23dee5231c..8d671b7a29d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,8 +953,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1056,7 +1055,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1064,14 +1063,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1080,7 +1079,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1797,8 +1796,9 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 /*
  * Return the sequence tuple along with its page LSN.
  *
- * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * This is primarily used by pg_dump to efficiently collect sequence data
+ * without querying each sequence individually, and is also leveraged by
+ * logical replication while synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 95b5cae9a55..2a1d4e03fe2 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,9 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given worker type,
  * subscription id, and relation id.
  *
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables. For table sync workers, the relid should be set
- * to the OID of the relation being synchronized.
+ * For apply workers and sequencesync workers, the relid should be set to
+ * InvalidOid, as they are not tied to a single relation. For tablesync
+ * workers, the relid should be set to the OID of the relation being
+ * synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
@@ -334,6 +335,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -422,7 +424,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -511,8 +514,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -848,6 +859,34 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ *
+ * Note that this value is not stored in the sequencesync worker, because that
+ * has finished already and is about to exit.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	/*
+	 * Acquire LogicalRepWorkerLock in LW_EXCLUSIVE mode to block the apply
+	 * worker (holding LW_SHARED) from reading or updating
+	 * last_seqsync_start_time. See ProcessSyncingSequencesForApply().
+	 */
+	LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);
+
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -896,7 +935,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1610,7 +1649,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1650,6 +1689,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..4bf70abcbaf
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,729 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a sequencesync worker
+ * to handle synchronization.
+ *
+ * A single sequencesync worker is responsible for synchronizing all sequences.
+ * It begins by retrieving the list of sequences that are flagged for
+ * synchronization, i.e., those in the INIT state. These sequences are then
+ * processed in batches, allowing multiple entries to be synchronized within a
+ * single transaction. The worker fetches the current sequence values and page
+ * LSNs from the remote publisher, updates the corresponding sequences on the
+ * local subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relation will be periodically released at each transaction commit.
+ *
+ * XXX: We didn't choose the launcher process to launch the sequencesync
+ * worker because it has no database connection and therefore cannot read the
+ * sequences to be synchronized from the pg_subscription_rel system catalog.
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/pg_lsn.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 10
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static List *seqinfos = NIL;
+
+/*
+ * Apply worker determines if sequence synchronization is needed.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSyncingSequencesForApply(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(NULL, &has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check if there is a sequencesync worker already running? */
+	sequencesync_worker = logicalrep_worker_find(WORKERTYPE_SEQUENCESYNC,
+												 MyLogicalRepWorker->subid,
+												 InvalidOid, true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	launch_sync_worker(WORKERTYPE_SEQUENCESYNC, nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * get_sequences_string
+ *
+ * Build a comma-separated string of schema-qualified sequence names
+ * for the given list of sequence indexes.
+ */
+static void
+get_sequences_string(StringInfo buf, List *seqindexes)
+{
+	resetStringInfo(buf);
+	foreach_int(seqidx, seqindexes)
+	{
+		LogicalRepSequenceInfo *seqinfo =
+			(LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+
+		if (buf->len > 0)
+			appendStringInfoString(buf, ", ");
+
+		appendStringInfo(buf, "\"%s.%s\"", seqinfo->nspname, seqinfo->seqname);
+	}
+}
+
+/*
+ * report_sequence_errors
+ *
+ * Report discrepancies found during sequence synchronization between
+ * the publisher and subscriber. Emits warnings for:
+ * a) insufficient privileges
+ * b) mismatched definitions or concurrent rename
+ * c) missing sequences on the publisher
+ * Then raises an ERROR to indicate synchronization failure.
+ */
+static void
+report_sequence_errors(List *insuffperm_seqs, List *mismatched_seqs,
+					   List *missing_seqs)
+{
+	StringInfo	seqstr = makeStringInfo();
+
+	if (insuffperm_seqs)
+	{
+		get_sequences_string(seqstr, insuffperm_seqs);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("insufficient privileges on sequence (%s)",
+							  "insufficient privileges on sequences (%s)",
+							  list_length(insuffperm_seqs),
+							  seqstr->data),
+				errhint_plural("Grant UPDATE privilege on the sequence.",
+							   "Grant UPDATE privilege on the sequences.",
+							   list_length(insuffperm_seqs)));
+	}
+
+	if (mismatched_seqs)
+	{
+		get_sequences_string(seqstr, mismatched_seqs);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("mismatched or renamed sequence on subscriber (%s)",
+							  "mismatched or renamed sequences on subscriber (%s)",
+							  list_length(mismatched_seqs),
+							  seqstr->data),
+				errhint_plural("Alter or re-create the local sequence to match the publisher's definition.",
+							   "Alter or re-create the local sequences to match the publisher's definition.",
+							   list_length(mismatched_seqs)));
+	}
+
+	if (missing_seqs)
+	{
+		get_sequences_string(seqstr, missing_seqs);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("sequence missing on publisher (%s)",
+							  "sequences missing on publisher (%s)",
+							  list_length(missing_seqs),
+							  seqstr->data),
+				errhint_plural("Remove the missing sequence from the local node or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.",
+							   "Remove the missing sequences from the local node or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription.",
+							   list_length(missing_seqs)));
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"",
+				   MySubscription->name));
+}
+
+/*
+ * get_and_validate_seq_info
+ *
+ * Extracts remote sequence information from the tuple slot received from the
+ * publisher, and validates it against the corresponding local sequence
+ * definition.
+ */
+static CopySeqResult
+get_and_validate_seq_info(TupleTableSlot *slot, Relation *sequence_rel,
+						  LogicalRepSequenceInfo **seqinfo, int *seqidx)
+{
+	bool		isnull;
+	int			col = 0;
+	Oid			remote_typid;
+	int64		remote_start;
+	int64		remote_increment;
+	int64		remote_min;
+	int64		remote_max;
+	bool		remote_cycle;
+	CopySeqResult result = COPYSEQ_SUCCESS;
+	HeapTuple	tup;
+	Form_pg_sequence local_seq;
+	LogicalRepSequenceInfo *seqinfo_local;
+
+	*seqidx = DatumGetInt32(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Identify the corresponding local sequence for the given index. */
+	*seqinfo = seqinfo_local =
+		(LogicalRepSequenceInfo *) list_nth(seqinfos, *seqidx);
+
+	seqinfo_local->last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqinfo_local->is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqinfo_local->page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	seqinfo_local->found_on_pub = true;
+
+	*sequence_rel = try_table_open(seqinfo_local->localrelid, RowExclusiveLock);
+	/* Sequence was concurrently dropped? */
+	if (!*sequence_rel)
+		return COPYSEQ_SKIPPED;
+
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo_local->localrelid));
+	/* The relation is open and locked, so its pg_sequence tuple must exist */
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence %u",
+			 seqinfo_local->localrelid);
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Sequence parameters for remote/local are the same? */
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	/* Sequence was concurrently renamed? */
+	if (strcmp(seqinfo_local->nspname,
+			   get_namespace_name(RelationGetNamespace(*sequence_rel))) ||
+		strcmp(seqinfo_local->seqname, RelationGetRelationName(*sequence_rel)))
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as
+ * synchronized (READY).
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, Oid seqowner)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+	Oid			seqoid = seqinfo->localrelid;
+
+	/*
+	 * If the user did not opt to run as the owner of the subscription
+	 * ('run_as_owner'), then copy the sequence as the owner of the sequence.
+	 */
+	if (!run_as_owner)
+		SwitchToUntrustedUser(seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqoid, GetUserId(), ACL_UPDATE);
+
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqoid, seqinfo->last_value, seqinfo->is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqoid, SUBREL_STATE_READY,
+							   seqinfo->page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn)
+{
+	int			cur_batch_base_index = 0;
+	int			n_seqinfos = list_length(seqinfos);
+	List	   *mismatched_seqs = NIL;
+	List	   *missing_seqs = NIL;
+	List	   *insuffperm_seqs = NIL;
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	MemoryContext oldctx;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, n_seqinfos));
+
+	while (cur_batch_base_index < n_seqinfos)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+		int			batch_missing_count;
+		Relation	sequence_rel;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		for (int idx = cur_batch_base_index; idx < n_seqinfos; idx++)
+		{
+			LogicalRepSequenceInfo *seqinfo =
+				(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
+							 seqinfo->nspname, seqinfo->seqname, idx);
+
+			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		/*
+		 * We deliberately avoid acquiring a local lock on the sequence before
+		 * querying the publisher to prevent potential distributed deadlocks
+		 * in bi-directional replication setups.
+		 *
+		 * Example scenario:
+		 *
+		 * - On each node, a background worker acquires a lock on a sequence
+		 * as part of a sync operation.
+		 *
+		 * - Concurrently, a user transaction attempts  to alter the same
+		 * sequence, waiting on the background worker's lock.
+		 *
+		 * - Meanwhile, a query from the other node tries to access metadata
+		 * that depends on the completion of the alter operation.
+		 *
+		 * - This creates a circular wait across nodes:
+		 *
+		 * Node-1: Query -> waits on Alter -> waits on Sync Worker
+		 *
+		 * Node-2: Query -> waits on Alter -> waits on Sync Worker
+		 *
+		 * Since each node only sees part of the wait graph, the deadlock may
+		 * go undetected, leading to indefinite blocking.
+		 *
+		 * Note: Each entry in VALUES includes an index 'seqidx' that
+		 * represents the sequence's position in the local 'seqinfos' list.
+		 * This index is propagated to the query results and later used to
+		 * directly map the fetched publisher sequence rows back to their
+		 * corresponding local entries without relying on result order or name
+		 * matching.
+		 */
+		appendStringInfo(cmd,
+						 "SELECT s.seqidx, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname, seqidx)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not fetch sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			CopySeqResult sync_status;
+			LogicalRepSequenceInfo *seqinfo;
+			int			seqidx;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			sync_status = get_and_validate_seq_info(slot, &sequence_rel,
+													&seqinfo, &seqidx);
+			if (sync_status == COPYSEQ_SUCCESS)
+				sync_status = copy_sequence(seqinfo,
+											sequence_rel->rd_rel->relowner);
+
+			switch (sync_status)
+			{
+				case COPYSEQ_MISMATCH:
+
+					/*
+					 * Allocate in a long-lived memory context, since these
+					 * errors will be reported after the transaction commits.
+					 */
+					oldctx = MemoryContextSwitchTo(TopMemoryContext);
+					mismatched_seqs = lappend_int(mismatched_seqs, seqidx);
+					MemoryContextSwitchTo(oldctx);
+					batch_mismatched_count++;
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+
+					/*
+					 * Allocate in a long-lived memory context, since these
+					 * errors will be reported after the transaction commits.
+					 */
+					oldctx = MemoryContextSwitchTo(TopMemoryContext);
+					insuffperm_seqs = lappend_int(insuffperm_seqs, seqidx);
+					MemoryContextSwitchTo(oldctx);
+					batch_insuffperm_count++;
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+								   seqinfo->nspname,
+								   seqinfo->seqname));
+					batch_skipped_count++;
+					break;
+				case COPYSEQ_SUCCESS:
+					ereport(DEBUG1,
+							errmsg_internal("logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+											MySubscription->name,
+											seqinfo->nspname,
+											seqinfo->seqname));
+					batch_succeeded_count++;
+					break;
+			}
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		batch_missing_count = batch_size - (batch_succeeded_count +
+											batch_mismatched_count +
+											batch_insuffperm_count +
+											batch_skipped_count);
+
+		ereport(DEBUG1,
+				errmsg_internal("logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+								MySubscription->name,
+								(cur_batch_base_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1,
+								batch_size, batch_succeeded_count, batch_skipped_count,
+								batch_mismatched_count, batch_insuffperm_count,
+								batch_missing_count));
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		if (batch_missing_count)
+		{
+			for (int idx = cur_batch_base_index;
+				 idx < cur_batch_base_index + batch_size; idx++)
+			{
+				LogicalRepSequenceInfo *seqinfo =
+					(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+				/* If the sequence was not found on publisher, record it */
+				if (!seqinfo->found_on_pub)
+					missing_seqs = lappend_int(missing_seqs, idx);
+			}
+		}
+
+		/*
+		 * Advance by the number of sequences attempted in this batch rather
+		 * than by the number of rows fetched, because sequences missing on
+		 * the publisher make the fetched row count smaller than batch_size.
+		 */
+		cur_batch_base_index += batch_size;
+	}
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs || mismatched_seqs || missing_seqs)
+		report_sequence_errors(insuffperm_seqs, mismatched_seqs, missing_seqs);
+}
+
+/*
+ * Determines which sequences require synchronization and initiates their
+ * synchronization process.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		LogicalRepSequenceInfo *seq;
+		Relation	sequence_rel;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+
+		/* Skip if sequence was dropped concurrently */
+		if (!sequence_rel)
+			continue;
+
+		/* Skip if the relation is not a sequence */
+		if (sequence_rel->rd_rel->relkind != RELKIND_SEQUENCE)
+		{
+			table_close(sequence_rel, RowExclusiveLock);
+			continue;
+		}
+
+		/*
+		 * Worker needs to process sequences across transaction boundary, so
+		 * allocate them under long-lived context.
+		 */
+		oldctx = MemoryContextSwitchTo(TopMemoryContext);
+
+		seq = palloc0_object(LogicalRepSequenceInfo);
+		seq->localrelid = subrel->srrelid;
+		seq->nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+		seq->seqname = pstrdup(RelationGetRelationName(sequence_rel));
+		seqinfos = lappend(seqinfos, seq);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/*
+	 * Exit early if no catalog entries found, likely due to concurrent drops.
+	 */
+	if (!seqinfos)
+		return;
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn);
+}
+
+/*
+ * Execute the initial sync with error handling, and disable the subscription
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are probably caused by system
+ * resource errors and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * The worker failed during sequence synchronization. Abort any
+			 * active transaction before re-throwing the error so that the
+			 * worker exits cleanly.
+			 */
+			AbortOutOfAnyTransaction();
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ae8c9385916..f8b1a3d4827 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -48,6 +49,8 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
 pg_noreturn void
 FinishSyncWorker(void)
 {
+	Assert(am_sequencesync_worker() || am_tablesync_worker());
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -62,15 +65,33 @@ FinishSyncWorker(void)
 	XLogFlush(GetXLogWriteRecPtr());
 
 	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
+	if (am_sequencesync_worker())
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
+	else
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+
 	CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
-							 InvalidOid);
+	if (am_sequencesync_worker())
+	{
+		/*
+		 * Find the leader apply worker and reset last_seqsync_start_time.
+		 * This ensures that the apply worker can restart the sequence sync
+		 * worker promptly whenever required.
+		 */
+		logicalrep_reset_seqsync_start_time();
+	}
+	else
+	{
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+								 InvalidOid);
+	}
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -86,7 +107,52 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker (sequence or table) if there is a sync
+ * worker slot available and the retry interval has elapsed.
+ *
+ * wtype: sync worker type.
+ * nsyncworkers: Number of currently running sync workers for the subscription.
+ * relid: InvalidOid for a sequencesync worker, or the OID of the table being
+ * synchronized for a tablesync worker.
+ * last_start_time: Pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers, Oid relid,
+				   TimestampTz *last_start_time)
+{
+	TimestampTz now;
+
+	Assert((wtype == WORKERTYPE_TABLESYNC && OidIsValid(relid)) ||
+		   (wtype == WORKERTYPE_SEQUENCESYNC && !OidIsValid(relid)));
+
+	/* If there is a free sync worker slot, start a new sync worker */
+	if (nsyncworkers >= max_sync_workers_per_subscription)
+		return;
+
+	now = GetCurrentTimestamp();
+
+	if (!(*last_start_time) ||
+		TimestampDifferenceExceeds(*last_start_time, now,
+								   wal_retrieve_retry_interval))
+	{
+		/*
+		 * Set the last_start_time even if we fail to start the worker, so
+		 * that we won't retry until wal_retrieve_retry_interval has elapsed.
+		 */
+		*last_start_time = now;
+		(void) logicalrep_worker_launch(wtype,
+										MyLogicalRepWorker->dbid,
+										MySubscription->oid,
+										MySubscription->name,
+										MyLogicalRepWorker->userid,
+										relid, DSM_HANDLE_INVALID, false);
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -108,6 +174,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSyncingSequencesForApply();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -117,17 +189,29 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-READY tables and non-READY sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * has_pending_subtables: true if the subscription has one or more tables that
+ * are not in READY state, otherwise false.
+ * has_pending_subsequences: true if the subscription has one or more sequences
+ * that are not in READY state, otherwise false.
+ * started_tx: set to true if this function started a transaction; in that
+ * case it is the caller's responsibility to commit it.
  */
-bool
-FetchRelationStates(bool *started_tx)
+void
+FetchRelationStates(bool *has_pending_subtables,
+					bool *has_pending_subsequences,
+					bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are declared static
+	 * because their values can be reused until pg_subscription_rel is
+	 * invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -135,10 +219,10 @@ FetchRelationStates(bool *started_tx)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
-		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,17 +234,23 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-READY state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
+		foreach_ptr(SubscriptionRelState, subrel, rstates)
 		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+			if (get_rel_relkind(subrel->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+			{
+				rstate = palloc(sizeof(SubscriptionRelState));
+				memcpy(rstate, subrel, sizeof(SubscriptionRelState));
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
+			}
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -185,5 +275,9 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
-	return has_subtables;
+	if (has_pending_subtables)
+		*has_pending_subtables = has_subtables;
+
+	if (has_pending_subsequences)
+		*has_pending_subsequences = has_subsequences_non_ready;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 58c98488d7b..e5a2856fd17 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -374,14 +374,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	};
 	static HTAB *last_start_times = NULL;
 	ListCell   *lc;
-	bool		started_tx = false;
+	bool		started_tx;
 	bool		should_exit = false;
 	Relation	rel = NULL;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -415,6 +415,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -428,11 +436,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -552,43 +555,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(WORKERTYPE_TABLESYNC, nsyncworkers,
+								   rstate->relid, &hentry->last_start_time);
 			}
 		}
 	}
@@ -1432,8 +1411,8 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 	}
 
 	/*
-	 * Make sure that the copy command runs as the table owner, unless the
-	 * user has opted out of that behaviour.
+	 * If the user did not opt to run as the owner of the subscription
+	 * ('run_as_owner'), then copy the table as the owner of the table.
 	 */
 	run_as_owner = MySubscription->runasowner;
 	if (!run_as_owner)
@@ -1596,7 +1575,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1618,11 +1597,11 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		started_tx;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1634,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1649,10 +1628,10 @@ bool
 HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
-	bool		has_subrels;
+	bool		has_tables;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1660,7 +1639,7 @@ HasSubscriptionTablesCached(void)
 		pgstat_report_stat(true);
 	}
 
-	return has_subrels;
+	return has_tables;
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7edd1c9cf06..8026a007ec3 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2465,7 +2485,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4137,7 +4160,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5580,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 !am_tablesync_worker());
 
 			PG_RE_THROW();
 		}
@@ -5700,8 +5727,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5812,6 +5839,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5831,14 +5862,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5921,9 +5954,15 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	if (am_leader_apply_worker() || am_tablesync_worker())
+	{
+		/*
+		 * Report the worker failed during either table synchronization or
+		 * apply.
+		 */
+		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+										 !am_tablesync_worker());
+	}
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9121a382f76..34b7fddb0e7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..cf52b47dfc6 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,20 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+/*
+ * Stores metadata about a sequence involved in logical replication.
+ */
+typedef struct LogicalRepSequenceInfo
+{
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+	XLogRecPtr	page_lsn;
+	int64		last_value;
+	bool		is_called;
+	bool		found_on_pub;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..46b4d89dd6e 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e23fa9a4514..32ef365f4a6 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -264,6 +267,8 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+							   Oid relid, TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
 								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -272,6 +277,7 @@ extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
 extern int	logicalrep_sync_worker_count(Oid subid);
+extern void logicalrep_reset_seqsync_start_time(void);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
 											   char *originname, Size szoriginname);
@@ -282,11 +288,13 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern void FetchRelationStates(bool *has_pending_subtables,
+								bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -353,13 +361,21 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..fbd819dfc02 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,14 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +54,165 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
+# sequences of the publisher, but changes to existing sequences should
+# not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence ('regress_s1') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION will not sync existing sequence');
+
+# Check - newly published sequence ('regress_s2') is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# Test: REFRESH SEQUENCES and REFRESH PUBLICATION (copy_data = off)
+#
+# 1. ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize all
+#    existing sequences, but not synchronize newly added ones.
+# 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+#    also not update sequence values for newly added sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# 1. Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences ('regress_s1' and 'regress_s2') are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence ('regress_s3') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+# 2. Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data as false
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence ('regress_s3') is not synced when
+# (copy_data = off).
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql('postgres',
+	"ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION");
+
+# Verify that a warning is logged for the parameter differences on sequence
+# ('regress_s4').
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)\n.*HINT:.* Alter or re-create the local sequence to match the publisher's definition/,
+	$log_offset);
+
+# Drop the sequence on the publisher, and verify that a warning is logged for
+# the missing sequence ('regress_s4').
+$node_publisher->safe_psql('postgres', qq(DROP SEQUENCE regress_s4;));
+
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? sequence missing on publisher \("public.regress_s4"\)\n.*HINT:.* Remove the missing sequence from the local node or run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to refresh the subscription/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 018b5919cf6..2ca7b75af57 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1629,6 +1630,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

Attachment: v20251102-0002-Documentation-for-sequence-synchronization.patch (text/x-patch)
From bb25bfabf22e293a54e7325a77257fe4d3c499b3 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251102 2/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 224 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 275 insertions(+), 28 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 06d1e4403b5..f5cbc68b938 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table/sequence synchronization
+        worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, and table/sequence
+        synchronization workers.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5359,9 +5359,11 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
         during the subscription initialization or when new tables are added.
+        A sequence synchronization worker, when required, also counts against
+        this limit.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        is the last sequence value set by <function>nextval</function> or
+        <function>setval</function>, <literal>is_called</literal> indicates
+        whether the sequence has been used, and <literal>page_lsn</literal> is
+        the LSN of the most recent WAL record that modified this sequence
+        relation.
+       </para>
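+       <para>
+        For example, assuming a sequence named <literal>myseq</literal>:
+<programlisting>
+SELECT * FROM pg_get_sequence_data('myseq');
+</programlisting>
+       </para>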
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..9e78c2f0465 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences can be
+   re-synchronized on demand at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,194 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences currently known to the subscription.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    The sequence synchronization worker validates that sequence definitions
+    match between publisher and subscriber. If mismatches exist, the worker
+    logs an error identifying them and exits. The apply worker continues
+    respawning the sequence synchronization worker until synchronization
+    succeeds. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Because incremental sequence changes are not replicated, sequence values
+    on the subscriber become stale as the publisher advances them.
+   </para>
+   <para>
+    Compare the sequence values on the publisher and the subscriber, and if
+    they have drifted, run
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link> to
+    re-synchronize them.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+/* pub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* pub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+/* pub # */ CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+/* sub # */ CREATE SUBSCRIPTION sub1
+/* sub - */ CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+/* sub - */ PUBLICATION pub1;
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all sequences known to the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2280,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, ongoing changes to the sequences themselves are not.  On
+     the subscriber, a sequence will retain the last value it synchronized
+     from the publisher.  If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values.  This can be
+     done by executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>,
+     by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>), or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2616,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers and
+    table/sequence synchronization workers.
    </para>
 
    <para>
@@ -2437,8 +2630,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index f3bf527d5b4..e3523ac882d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..d2ca3165f8a 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter has no effect for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter has no effect for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter has no effect
+          for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter has no effect for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0
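
As a quick usage illustration of the function documented above (a sketch,
assuming this patch series is applied; 's1' is the sequence from the examples
and the output will vary):

    SELECT last_value, is_called, page_lsn
    FROM pg_get_sequence_data('s1'::regclass);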

v20251102-0003-Add-seq_sync_error_count-to-subscription-s.patch (text/x-patch)
From 2bf7ec1f91e44c643e68d01261c004e6e2b1dd6e Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 30 Oct 2025 21:01:47 +0530
Subject: [PATCH v20251102 3/3] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
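
For example, the new counter can be read alongside the existing error
counters (illustrative query; the column is present only once this patch is
applied):

    SELECT subname, apply_error_count, seq_sync_error_count, sync_error_count
    FROM pg_stat_subscription_stats;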
---
 doc/src/sgml/monitoring.sgml                  |  9 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 17 ++--
 .../utils/activity/pgstat_subscription.c      | 27 +++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 ++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 80 ++++++++++++-------
 11 files changed, 122 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index e3523ac882d..1dc0024ab92 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2193,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>seq_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dec8df4f8ee..059e8778ca7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1415,6 +1415,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 4bf70abcbaf..2299f87aef4 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -709,6 +709,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e5a2856fd17..dcc6124cc73 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8026a007ec3..cb8a5b0f70c 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5607,7 +5607,7 @@ start_apply(XLogRecPtr origin_startpos)
 			 */
 			AbortOutOfAnyTransaction();
 			pgstat_report_subscription_error(MySubscription->oid,
-											 !am_tablesync_worker());
+											 MyLogicalRepWorker->type);
 
 			PG_RE_THROW();
 		}
@@ -5954,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		 * Report the worker failed during either table synchronization or
-		 * apply.
-		 */
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										 !am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a710508979e..1521d6e2ab4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2203,7 +2203,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2221,25 +2221,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2256,6 +2258,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34b7fddb0e7..5cf9e12fcb9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 7ae503e71a2..a0610bb3e31 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -769,7 +772,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 77e25ca029e..fe20f613c3a 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..23f3511f9a4 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and seq_sync_error_count > 0 and sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence increment on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,14 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0

#462Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#461)
1 attachment(s)
Re: Logical Replication of sequences

On Mon, Nov 3, 2025 at 12:35 AM vignesh C <vignesh21@gmail.com> wrote:

Some minor comments on 0001.
1.
+ /*
+ * Acquire LogicalRepWorkerLock in LW_EXCLUSIVE mode to block the apply
+ * worker (holding LW_SHARED) from reading or updating
+ * last_seqsync_start_time. See ProcessSyncingSequencesForApply().
+ */
+ LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);

Is it required to have LW_EXCLUSIVE lock here? In the function
ProcessSyncingSequencesForApply(), apply_worker access/update
last_seqsync_start_time only once it ensures that sequence sync worker
has exited. I have made changes related to this in the attached to
show you what I have in mind.

2.
+ /*
+ * Worker needs to process sequences across transaction boundary, so
+ * allocate them under long-lived context.
+ */
+ oldctx = MemoryContextSwitchTo(TopMemoryContext);
+
+ seq = palloc0_object(LogicalRepSequenceInfo);
…
...
+ /*
+ * Allocate in a long-lived memory context, since these
+ * errors will be reported after the transaction commits.
+ */
+ oldctx = MemoryContextSwitchTo(TopMemoryContext);
+ mismatched_seqs = lappend_int(mismatched_seqs, seqidx);

At the above and other places in syncworker, we don't need to use
TopMemoryContext; rather, we can use ApplyContext allocated via
SequenceSyncWorkerMain()->SetupApplyOrSyncWorker()->InitializeLogRepWorker().

3.
ProcessSyncingTablesForApply(current_lsn);
+ ProcessSyncingSequencesForApply();

I am not sure if the function name ProcessSyncingSequencesForApply is
appropriate. For tables, we do some work for concurrently running
tablesync workers and launch new as well but for sequences, we don't
do any work for sequences that are already being synced. How about
ProcessSequencesForSync()?

4.
+      /* Should never happen. */
+      elog(ERROR, "Sequence synchronization worker not expected to
process relations");

The first letter of the ERROR message should be small. How about:
"sequence synchronization worker is not expected to process
relations"? I have made this change in the attached.

5.
@@ -5580,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
* idle state.
*/
AbortOutOfAnyTransaction();
- pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+ pgstat_report_subscription_error(MySubscription->oid,
+ !am_tablesync_worker());

Why this change?

6.
@@ -264,6 +267,8 @@ extern bool
logicalrep_worker_launch(LogicalRepWorkerType wtype,
Oid userid, Oid relid,
dsm_handle subworker_dsm,
bool retain_dead_tuples);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+    Oid relid, TimestampTz *last_start_time);
extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
   Oid relid);
All the other functions except the newly added one are from
launcher.c. So, this one should be after those, no? It should be after
the InvalidateSyncingRelStates() declaration.

Apart from above, please find attached top-up patch to improve
comments and some other cosmetic stuff. The 0001 patch looks good to
me apart from the above minor points.

--
With Regards,
Amit Kapila.

Attachments:

v1_amit.patch.txt (text/plain)
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 2a1d4e03fe2..ff9bc02a3df 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -872,11 +872,11 @@ logicalrep_reset_seqsync_start_time(void)
 	LogicalRepWorker *worker;
 
 	/*
-	 * Acquire LogicalRepWorkerLock in LW_EXCLUSIVE mode to block the apply
-	 * worker (holding LW_SHARED) from reading or updating
-	 * last_seqsync_start_time. See ProcessSyncingSequencesForApply().
+	 * The apply worker can't access last_seqsync_start_time concurrently, so
+	 * it is okay to use SHARED lock here. See
+	 * ProcessSyncingSequencesForApply().
 	 */
-	LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
 
 	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
 									MyLogicalRepWorker->subid, InvalidOid,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 4bf70abcbaf..7f8afc1fec8 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -127,6 +127,10 @@ ProcessSyncingSequencesForApply(void)
 	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
 	LWLockRelease(LogicalRepWorkerLock);
 
+	/*
+	 * It is okay to read/update last_seqsync_start_time here in apply worker
+	 * as we have already ensured that sync worker doesn't exist.
+	 */
 	launch_sync_worker(WORKERTYPE_SEQUENCESYNC, nsyncworkers, InvalidOid,
 					   &MyLogicalRepWorker->last_seqsync_start_time);
 }
@@ -419,7 +423,7 @@ copy_sequences(WalReceiverConn *conn)
 		 * - On each node, a background worker acquires a lock on a sequence
 		 * as part of a sync operation.
 		 *
-		 * - Concurrently, a user transaction attempts  to alter the same
+		 * - Concurrently, a user transaction attempts to alter the same
 		 * sequence, waiting on the background worker's lock.
 		 *
 		 * - Meanwhile, a query from the other node tries to access metadata
@@ -485,8 +489,9 @@ copy_sequences(WalReceiverConn *conn)
 				case COPYSEQ_MISMATCH:
 
 					/*
-					 * Allocate in a long-lived memory context, since these
-					 * errors will be reported after the transaction commits.
+					 * Remember mismatched sequences in a long-lived memory
+					 * context, since these will be used after the transaction
+					 * commits.
 					 */
 					oldctx = MemoryContextSwitchTo(TopMemoryContext);
 					mismatched_seqs = lappend_int(mismatched_seqs, seqidx);
@@ -496,8 +501,9 @@ copy_sequences(WalReceiverConn *conn)
 				case COPYSEQ_INSUFFICIENT_PERM:
 
 					/*
-					 * Allocate in a long-lived memory context, since these
-					 * errors will be reported after the transaction commits.
+					 * Remember the sequences with insufficient privileges in a
+					 * long-lived memory context, since these will be used after
+					 * the transaction commits.
 					 */
 					oldctx = MemoryContextSwitchTo(TopMemoryContext);
 					insuffperm_seqs = lappend_int(insuffperm_seqs, seqidx);
@@ -573,7 +579,7 @@ copy_sequences(WalReceiverConn *conn)
 }
 
 /*
- * Determines which sequences require synchronization and initiates their
+ * Identifies sequences that require synchronization and initiates the
  * synchronization process.
  */
 static void
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index f8b1a3d4827..53530bb39b6 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -107,8 +107,8 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Attempt to launch a sync worker (sequence or table) if there is a sync
- * worker slot available and the retry interval has elapsed.
+ * Attempt to launch a sync worker for one or more sequences or a table, if
+ * a worker slot is available and the retry interval has elapsed.
  *
  * wtype: sync worker type.
  * nsyncworkers: Number of currently running sync workers for the subscription.
@@ -179,7 +179,7 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_SEQUENCESYNC:
 			/* Should never happen. */
-			elog(ERROR, "Sequence synchronization worker not expected to process relations");
+			elog(ERROR, "sequence synchronization worker is not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8026a007ec3..bc8b5a5cb69 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -704,7 +704,7 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 
 		case WORKERTYPE_SEQUENCESYNC:
 			/* Should never happen. */
-			elog(ERROR, "Sequence synchronization worker not expected to apply changes");
+			elog(ERROR, "sequence synchronization worker is not expected to apply changes");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index 32ef365f4a6..87fb211d040 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -267,8 +267,6 @@ extern bool logicalrep_worker_launch(LogicalRepWorkerType wtype,
 									 Oid userid, Oid relid,
 									 dsm_handle subworker_dsm,
 									 bool retain_dead_tuples);
-extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
-							   Oid relid, TimestampTz *last_start_time);
 extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
 								   Oid relid);
 extern void logicalrep_pa_worker_stop(ParallelApplyWorkerInfo *winfo);
@@ -292,6 +290,8 @@ extern void ProcessSyncingSequencesForApply(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+							   Oid relid, TimestampTz* last_start_time);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
 extern void FetchRelationStates(bool *has_pending_subtables,
 								bool *has_pending_sequences, bool *started_tx);
#463vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#462)
3 attachment(s)
Re: Logical Replication of sequences

On Mon, 3 Nov 2025 at 16:44, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 3, 2025 at 12:35 AM vignesh C <vignesh21@gmail.com> wrote:

Some minor comments on 0001.
1.
+ /*
+ * Acquire LogicalRepWorkerLock in LW_EXCLUSIVE mode to block the apply
+ * worker (holding LW_SHARED) from reading or updating
+ * last_seqsync_start_time. See ProcessSyncingSequencesForApply().
+ */
+ LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE);

Is it required to have LW_EXCLUSIVE lock here? In the function
ProcessSyncingSequencesForApply(), apply_worker access/update
last_seqsync_start_time only once it ensures that sequence sync worker
has exited. I have made changes related to this in the attached to
show you what I have in mind.

Modified

2.
+ /*
+ * Worker needs to process sequences across transaction boundary, so
+ * allocate them under long-lived context.
+ */
+ oldctx = MemoryContextSwitchTo(TopMemoryContext);
+
+ seq = palloc0_object(LogicalRepSequenceInfo);
…
...
+ /*
+ * Allocate in a long-lived memory context, since these
+ * errors will be reported after the transaction commits.
+ */
+ oldctx = MemoryContextSwitchTo(TopMemoryContext);
+ mismatched_seqs = lappend_int(mismatched_seqs, seqidx);

At the above and other places in syncworker, we don't need to use
TopMemoryContext; rather, we can use ApplyContext allocated via
SequenceSyncWorkerMain()->SetupApplyOrSyncWorker()->InitializeLogRepWorker().

Modified

3.
ProcessSyncingTablesForApply(current_lsn);
+ ProcessSyncingSequencesForApply();

I am not sure if the function name ProcessSyncingSequencesForApply is
appropriate. For tables, we do some work for concurrently running
tablesync workers and launch new as well but for sequences, we don't
do any work for sequences that are already being synced. How about
ProcessSequencesForSync()?

Changed it to ProcessSequencesForSync

4.
+      /* Should never happen. */
+      elog(ERROR, "Sequence synchronization worker not expected to
process relations");

The first letter of the ERROR message should be small. How about:
"sequence synchronization worker is not expected to process
relations"? I have made this change in the attached.

Modified

5.
@@ -5580,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
* idle state.
*/
AbortOutOfAnyTransaction();
- pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+ pgstat_report_subscription_error(MySubscription->oid,
+ !am_tablesync_worker());

Why this change?

This is not required, removed this change

6.
@@ -264,6 +267,8 @@ extern bool
logicalrep_worker_launch(LogicalRepWorkerType wtype,
Oid userid, Oid relid,
dsm_handle subworker_dsm,
bool retain_dead_tuples);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+    Oid relid, TimestampTz *last_start_time);
extern void logicalrep_worker_stop(LogicalRepWorkerType wtype, Oid subid,
Oid relid);
All the other functions except the newly added one are from
launcher.c. So, this one should be after those, no? It should be after
the InvalidateSyncingRelStates() declaration.

Modified

Apart from above, please find attached top-up patch to improve
comments and some other cosmetic stuff.

Thanks, I have merged them.

The attached v20251103 patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20251103-0001-Add-sequence-synchronization-for-logical-r.patch (text/x-patch)
From 2171a8c6623e2b8dd1e8ca9cef70cdbccd4a9422 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 15:31:13 +0530
Subject: [PATCH v20251103 1/3] Add sequence synchronization for logical
 replication.

This patch introduces sequence synchronization. Sequences that are synced
will have 2 states:
   - INIT (needs [re]synchronizing)
   - READY (is already synchronized)

A new sequencesync worker is launched as needed to synchronize sequences.
A single sequencesync worker is responsible for synchronizing all
sequences. It begins by retrieving the list of sequences that are flagged
for synchronization, i.e., those in the INIT state. These sequences are
then processed in batches, allowing multiple entries to be synchronized
within a single transaction. The worker fetches the current sequence
values and page LSNs from the remote publisher, updates the corresponding
sequences on the local subscriber, and finally marks each sequence as
READY upon successful synchronization.

Sequence synchronization occurs in 3 places:
1) CREATE SUBSCRIPTION
    - The command syntax remains unchanged.
    - The subscriber retrieves sequences associated with publications.
    - Published sequences are added to pg_subscription_rel with INIT
      state.
    - Initiate the sequencesync worker to synchronize all sequences.

2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION
    - The command syntax remains unchanged.
    - Dropped published sequences are removed from pg_subscription_rel.
    - Newly published sequences are added to pg_subscription_rel with INIT
      state.
    - Initiate the sequencesync worker to synchronize only newly added
      sequences.

3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES
    - A new command introduced for PG19 by f0b3573c3a.
    - All sequences in pg_subscription_rel are reset to INIT state.
    - Initiate the sequencesync worker to synchronize all sequences.
    - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command,
      addition and removal of missing sequences will not be done in this
      case.
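
For example (illustrative object names and connection string), the commands
involved on the two nodes are:

    -- publisher
    CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

    -- subscriber
    CREATE SUBSCRIPTION seq_sub
        CONNECTION 'host=publisher dbname=postgres' PUBLICATION seq_pub;
    ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;
    ALTER SUBSCRIPTION seq_sub REFRESH SEQUENCES;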

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
---
 src/backend/catalog/pg_subscription.c         |   2 +-
 src/backend/commands/sequence.c               |  18 +-
 src/backend/postmaster/bgworker.c             |   5 +-
 src/backend/replication/logical/Makefile      |   1 +
 src/backend/replication/logical/launcher.c    |  56 +-
 src/backend/replication/logical/meson.build   |   1 +
 .../replication/logical/sequencesync.c        | 730 ++++++++++++++++++
 src/backend/replication/logical/syncutils.c   | 140 +++-
 src/backend/replication/logical/tablesync.c   |  77 +-
 src/backend/replication/logical/worker.c      |  66 +-
 src/backend/utils/misc/guc_parameters.dat     |   2 +-
 src/include/catalog/pg_proc.dat               |   2 +-
 src/include/catalog/pg_subscription_rel.h     |  23 +
 src/include/commands/sequence.h               |   1 +
 src/include/replication/logicalworker.h       |   3 +-
 src/include/replication/worker_internal.h     |  22 +-
 src/test/subscription/t/036_sequences.pl      | 175 ++++-
 src/tools/pgindent/typedefs.list              |   2 +
 18 files changed, 1209 insertions(+), 117 deletions(-)
 create mode 100644 src/backend/replication/logical/sequencesync.c

diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 15b233a37d8..1945627ed88 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -354,7 +354,7 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
 							  ObjectIdGetDatum(relid),
 							  ObjectIdGetDatum(subid));
 	if (!HeapTupleIsValid(tup))
-		elog(ERROR, "subscription table %u in subscription %u does not exist",
+		elog(ERROR, "subscription relation %u in subscription %u does not exist",
 			 relid, subid);
 
 	/* Update the tuple. */
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c23dee5231c..8d671b7a29d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -112,7 +112,6 @@ static void init_params(ParseState *pstate, List *options, bool for_identity,
 						bool *is_called,
 						bool *need_seq_rewrite,
 						List **owned_by);
-static void do_setval(Oid relid, int64 next, bool iscalled);
 static void process_owned_by(Relation seqrel, List *owned_by, bool for_identity);
 
 
@@ -954,8 +953,8 @@ lastval(PG_FUNCTION_ARGS)
  * it is the only way to clear the is_called flag in an existing
  * sequence.
  */
-static void
-do_setval(Oid relid, int64 next, bool iscalled)
+void
+SetSequence(Oid relid, int64 next, bool iscalled)
 {
 	SeqTable	elm;
 	Relation	seqrel;
@@ -1056,7 +1055,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
 
 /*
  * Implement the 2 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval_oid(PG_FUNCTION_ARGS)
@@ -1064,14 +1063,14 @@ setval_oid(PG_FUNCTION_ARGS)
 	Oid			relid = PG_GETARG_OID(0);
 	int64		next = PG_GETARG_INT64(1);
 
-	do_setval(relid, next, true);
+	SetSequence(relid, next, true);
 
 	PG_RETURN_INT64(next);
 }
 
 /*
  * Implement the 3 arg setval procedure.
- * See do_setval for discussion.
+ * See SetSequence for discussion.
  */
 Datum
 setval3_oid(PG_FUNCTION_ARGS)
@@ -1080,7 +1079,7 @@ setval3_oid(PG_FUNCTION_ARGS)
 	int64		next = PG_GETARG_INT64(1);
 	bool		iscalled = PG_GETARG_BOOL(2);
 
-	do_setval(relid, next, iscalled);
+	SetSequence(relid, next, iscalled);
 
 	PG_RETURN_INT64(next);
 }
@@ -1797,8 +1796,9 @@ pg_sequence_parameters(PG_FUNCTION_ARGS)
 /*
  * Return the sequence tuple along with its page LSN.
  *
- * This is primarily intended for use by pg_dump to gather sequence data
- * without needing to individually query each sequence relation.
+ * This is primarily used by pg_dump to efficiently collect sequence data
+ * without querying each sequence individually, and is also used by logical
+ * replication when synchronizing sequences.
  */
 Datum
 pg_get_sequence_data(PG_FUNCTION_ARGS)
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index 1ad65c237c3..142a02eb5e9 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -131,7 +131,10 @@ static const struct
 		"ParallelApplyWorkerMain", ParallelApplyWorkerMain
 	},
 	{
-		"TablesyncWorkerMain", TablesyncWorkerMain
+		"TableSyncWorkerMain", TableSyncWorkerMain
+	},
+	{
+		"SequenceSyncWorkerMain", SequenceSyncWorkerMain
 	}
 };
 
diff --git a/src/backend/replication/logical/Makefile b/src/backend/replication/logical/Makefile
index c62c8c67521..c719af1f8a9 100644
--- a/src/backend/replication/logical/Makefile
+++ b/src/backend/replication/logical/Makefile
@@ -26,6 +26,7 @@ OBJS = \
 	proto.o \
 	relation.o \
 	reorderbuffer.o \
+	sequencesync.o \
 	slotsync.o \
 	snapbuild.o \
 	syncutils.o \
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index 95b5cae9a55..bb49ad17dc7 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -248,9 +248,10 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
  * Walks the workers array and searches for one that matches given worker type,
  * subscription id, and relation id.
  *
- * For apply workers, the relid should be set to InvalidOid, as they manage
- * changes across all tables. For table sync workers, the relid should be set
- * to the OID of the relation being synchronized.
+ * For apply workers and sequencesync workers, the relid should be set to
+ * InvalidOid, as they are not tied to a single relation (they handle all
+ * tables and all sequences, respectively). For tablesync workers, the relid
+ * should be set to the OID of the relation being synchronized.
  */
 LogicalRepWorker *
 logicalrep_worker_find(LogicalRepWorkerType wtype, Oid subid, Oid relid,
@@ -334,6 +335,7 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 	int			nparallelapplyworkers;
 	TimestampTz now;
 	bool		is_tablesync_worker = (wtype == WORKERTYPE_TABLESYNC);
+	bool		is_sequencesync_worker = (wtype == WORKERTYPE_SEQUENCESYNC);
 	bool		is_parallel_apply_worker = (wtype == WORKERTYPE_PARALLEL_APPLY);
 
 	/*----------
@@ -422,7 +424,8 @@ retry:
 	 * sync worker limit per subscription. So, just return silently as we
 	 * might get here because of an otherwise harmless race condition.
 	 */
-	if (is_tablesync_worker && nsyncworkers >= max_sync_workers_per_subscription)
+	if ((is_tablesync_worker || is_sequencesync_worker) &&
+		nsyncworkers >= max_sync_workers_per_subscription)
 	{
 		LWLockRelease(LogicalRepWorkerLock);
 		return false;
@@ -478,6 +481,7 @@ retry:
 	TIMESTAMP_NOBEGIN(worker->last_recv_time);
 	worker->reply_lsn = InvalidXLogRecPtr;
 	TIMESTAMP_NOBEGIN(worker->reply_time);
+	worker->last_seqsync_start_time = 0;
 
 	/* Before releasing lock, remember generation for future identification. */
 	generation = worker->generation;
@@ -511,8 +515,16 @@ retry:
 			memcpy(bgw.bgw_extra, &subworker_dsm, sizeof(dsm_handle));
 			break;
 
+		case WORKERTYPE_SEQUENCESYNC:
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "SequenceSyncWorkerMain");
+			snprintf(bgw.bgw_name, BGW_MAXLEN,
+					 "logical replication sequencesync worker for subscription %u",
+					 subid);
+			snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication sequencesync worker");
+			break;
+
 		case WORKERTYPE_TABLESYNC:
-			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TablesyncWorkerMain");
+			snprintf(bgw.bgw_function_name, BGW_MAXLEN, "TableSyncWorkerMain");
 			snprintf(bgw.bgw_name, BGW_MAXLEN,
 					 "logical replication tablesync worker for subscription %u sync %u",
 					 subid,
@@ -848,6 +860,33 @@ logicalrep_launcher_onexit(int code, Datum arg)
 	LogicalRepCtx->launcher_pid = 0;
 }
 
+/*
+ * Reset the last_seqsync_start_time of the sequencesync worker in the
+ * subscription's apply worker.
+ *
+ * Note that this value is not stored in the sequencesync worker, because that
+ * worker has already finished and is about to exit.
+ */
+void
+logicalrep_reset_seqsync_start_time(void)
+{
+	LogicalRepWorker *worker;
+
+	/*
+	 * The apply worker can't access last_seqsync_start_time concurrently, so
+	 * it is okay to use SHARED lock here. See ProcessSequencesForSync().
+	 */
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	worker = logicalrep_worker_find(WORKERTYPE_APPLY,
+									MyLogicalRepWorker->subid, InvalidOid,
+									true);
+	if (worker)
+		worker->last_seqsync_start_time = 0;
+
+	LWLockRelease(LogicalRepWorkerLock);
+}
+
 /*
  * Cleanup function.
  *
@@ -896,7 +935,7 @@ logicalrep_sync_worker_count(Oid subid)
 	{
 		LogicalRepWorker *w = &LogicalRepCtx->workers[i];
 
-		if (isTablesyncWorker(w) && w->subid == subid)
+		if (w->subid == subid && (isTableSyncWorker(w) || isSequenceSyncWorker(w)))
 			res++;
 	}
 
@@ -1610,7 +1649,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 		worker_pid = worker.proc->pid;
 
 		values[0] = ObjectIdGetDatum(worker.subid);
-		if (isTablesyncWorker(&worker))
+		if (isTableSyncWorker(&worker))
 			values[1] = ObjectIdGetDatum(worker.relid);
 		else
 			nulls[1] = true;
@@ -1650,6 +1689,9 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)
 			case WORKERTYPE_PARALLEL_APPLY:
 				values[9] = CStringGetTextDatum("parallel apply");
 				break;
+			case WORKERTYPE_SEQUENCESYNC:
+				values[9] = CStringGetTextDatum("sequence synchronization");
+				break;
 			case WORKERTYPE_TABLESYNC:
 				values[9] = CStringGetTextDatum("table synchronization");
 				break;
diff --git a/src/backend/replication/logical/meson.build b/src/backend/replication/logical/meson.build
index 9283e996ef4..a2268d8361e 100644
--- a/src/backend/replication/logical/meson.build
+++ b/src/backend/replication/logical/meson.build
@@ -12,6 +12,7 @@ backend_sources += files(
   'proto.c',
   'relation.c',
   'reorderbuffer.c',
+  'sequencesync.c',
   'slotsync.c',
   'snapbuild.c',
   'syncutils.c',
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
new file mode 100644
index 00000000000..623311b55ab
--- /dev/null
+++ b/src/backend/replication/logical/sequencesync.c
@@ -0,0 +1,730 @@
+/*-------------------------------------------------------------------------
+ * sequencesync.c
+ *	  PostgreSQL logical replication: sequence synchronization
+ *
+ * Copyright (c) 2025, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/replication/logical/sequencesync.c
+ *
+ * NOTES
+ *	  This file contains code for sequence synchronization for
+ *	  logical replication.
+ *
+ * Sequences requiring synchronization are tracked in the pg_subscription_rel
+ * catalog.
+ *
+ * Sequences to be synchronized will be added with state INIT when either of
+ * the following commands is executed:
+ * CREATE SUBSCRIPTION
+ * ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+ *
+ * Executing the following command resets all sequences in the subscription to
+ * state INIT, triggering re-synchronization:
+ * ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+ *
+ * The apply worker periodically scans pg_subscription_rel for sequences in
+ * INIT state. When such sequences are found, it spawns a sequencesync worker
+ * to handle synchronization.
+ *
+ * A single sequencesync worker is responsible for synchronizing all sequences.
+ * It begins by retrieving the list of sequences that are flagged for
+ * synchronization, i.e., those in the INIT state. These sequences are then
+ * processed in batches, allowing multiple entries to be synchronized within a
+ * single transaction. The worker fetches the current sequence values and page
+ * LSNs from the remote publisher, updates the corresponding sequences on the
+ * local subscriber, and finally marks each sequence as READY upon successful
+ * synchronization.
+ *
+ * Sequence state transitions follow this pattern:
+ *   INIT → READY
+ *
+ * To avoid creating too many transactions, up to MAX_SEQUENCES_SYNC_PER_BATCH
+ * sequences are synchronized per transaction. The locks on the sequence
+ * relations are released at each transaction commit.
+ *
+ * XXX: We didn't choose the launcher process to launch the sequencesync
+ * worker because it lacks a database connection and so cannot read the
+ * pg_subscription_rel catalog to find which sequences need synchronizing.
+ *-------------------------------------------------------------------------
+ */
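+
+/*
+ * Usage sketch (illustrative; the subscription name "mysub" is hypothetical).
+ * On the subscriber, the commands described above are typically used as:
+ *
+ *   -- pick up sequences newly added to the publication:
+ *   ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
+ *
+ *   -- re-synchronize all previously synchronized sequences:
+ *   ALTER SUBSCRIPTION mysub REFRESH SEQUENCES;
+ *
+ *   -- monitor progress; srsubstate 'i' = INIT, 'r' = READY:
+ *   SELECT c.relname, sr.srsubstate
+ *   FROM pg_subscription_rel sr
+ *   JOIN pg_class c ON c.oid = sr.srrelid
+ *   WHERE c.relkind = 'S';
+ */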
+
+#include "postgres.h"
+
+#include "access/table.h"
+#include "catalog/pg_sequence.h"
+#include "catalog/pg_subscription_rel.h"
+#include "commands/sequence.h"
+#include "pgstat.h"
+#include "postmaster/interrupt.h"
+#include "replication/logicalworker.h"
+#include "replication/worker_internal.h"
+#include "utils/acl.h"
+#include "utils/fmgroids.h"
+#include "utils/guc.h"
+#include "utils/inval.h"
+#include "utils/lsyscache.h"
+#include "utils/memutils.h"
+#include "utils/pg_lsn.h"
+#include "utils/syscache.h"
+#include "utils/usercontext.h"
+
+#define REMOTE_SEQ_COL_COUNT 10
+
+typedef enum CopySeqResult
+{
+	COPYSEQ_SUCCESS,
+	COPYSEQ_MISMATCH,
+	COPYSEQ_INSUFFICIENT_PERM,
+	COPYSEQ_SKIPPED
+} CopySeqResult;
+
+static List *seqinfos = NIL;
+
+/*
+ * The apply worker determines whether sequence synchronization is needed.
+ *
+ * Start a sequencesync worker if one is not already running. The active
+ * sequencesync worker will handle all pending sequence synchronization. If any
+ * sequences remain unsynchronized after it exits, a new worker can be started
+ * in the next iteration.
+ */
+void
+ProcessSequencesForSync(void)
+{
+	LogicalRepWorker *sequencesync_worker;
+	int			nsyncworkers;
+	bool		has_pending_sequences;
+	bool		started_tx;
+
+	FetchRelationStates(NULL, &has_pending_sequences, &started_tx);
+
+	if (started_tx)
+	{
+		CommitTransactionCommand();
+		pgstat_report_stat(true);
+	}
+
+	if (!has_pending_sequences)
+		return;
+
+	LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
+
+	/* Check whether a sequencesync worker is already running. */
+	sequencesync_worker = logicalrep_worker_find(WORKERTYPE_SEQUENCESYNC,
+												 MyLogicalRepWorker->subid,
+												 InvalidOid, true);
+	if (sequencesync_worker)
+	{
+		LWLockRelease(LogicalRepWorkerLock);
+		return;
+	}
+
+	/*
+	 * Count running sync workers for this subscription, while we have the
+	 * lock.
+	 */
+	nsyncworkers = logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+	LWLockRelease(LogicalRepWorkerLock);
+
+	/*
+	 * It is okay to read/update last_seqsync_start_time here in the apply
+	 * worker, as we have already ensured that no sequencesync worker exists.
+	 */
+	launch_sync_worker(WORKERTYPE_SEQUENCESYNC, nsyncworkers, InvalidOid,
+					   &MyLogicalRepWorker->last_seqsync_start_time);
+}
+
+/*
+ * get_sequences_string
+ *
+ * Build a comma-separated string of schema-qualified sequence names
+ * for the given list of sequence indexes.
+ */
+static void
+get_sequences_string(List *seqindexes, StringInfo buf)
+{
+	resetStringInfo(buf);
+	foreach_int(seqidx, seqindexes)
+	{
+		LogicalRepSequenceInfo *seqinfo =
+			(LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
+
+		if (buf->len > 0)
+			appendStringInfoString(buf, ", ");
+
+		appendStringInfo(buf, "\"%s.%s\"", seqinfo->nspname, seqinfo->seqname);
+	}
+}
+
+/*
+ * report_sequence_errors
+ *
+ * Report discrepancies found during sequence synchronization between
+ * the publisher and subscriber. Emits warnings for:
+ * a) insufficient privileges
+ * b) mismatched definitions or concurrent rename
+ * c) missing sequences on the subscriber
+ * Then raises an ERROR to indicate synchronization failure.
+ */
+static void
+report_sequence_errors(List *insuffperm_seqs_idx, List *mismatched_seqs_idx,
+					   List *missing_seqs_idx)
+{
+	StringInfo	seqstr = makeStringInfo();
+
+	if (insuffperm_seqs_idx)
+	{
+		get_sequences_string(insuffperm_seqs_idx, seqstr);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("insufficient privileges on sequence (%s)",
+							  "insufficient privileges on sequences (%s)",
+							  list_length(insuffperm_seqs_idx),
+							  seqstr->data));
+	}
+
+	if (mismatched_seqs_idx)
+	{
+		get_sequences_string(mismatched_seqs_idx, seqstr);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("mismatched or renamed sequence on subscriber (%s)",
+							  "mismatched or renamed sequences on subscriber (%s)",
+							  list_length(mismatched_seqs_idx),
+							  seqstr->data));
+	}
+
+	if (missing_seqs_idx)
+	{
+		get_sequences_string(missing_seqs_idx, seqstr);
+		ereport(WARNING,
+				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				errmsg_plural("missing sequence on publisher (%s)",
+							  "missing sequences on publisher (%s)",
+							  list_length(missing_seqs_idx),
+							  seqstr->data));
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("logical replication sequence synchronization failed for subscription \"%s\"",
+				   MySubscription->name));
+}
+
+/*
+ * get_and_validate_seq_info
+ *
+ * Extracts remote sequence information from the tuple slot received from the
+ * publisher, and validates it against the corresponding local sequence
+ * definition.
+ */
+static CopySeqResult
+get_and_validate_seq_info(TupleTableSlot *slot, Relation *sequence_rel,
+						  LogicalRepSequenceInfo **seqinfo, int *seqidx)
+{
+	bool		isnull;
+	int			col = 0;
+	Oid			remote_typid;
+	int64		remote_start;
+	int64		remote_increment;
+	int64		remote_min;
+	int64		remote_max;
+	bool		remote_cycle;
+	CopySeqResult result = COPYSEQ_SUCCESS;
+	HeapTuple	tup;
+	Form_pg_sequence local_seq;
+	LogicalRepSequenceInfo *seqinfo_local;
+
+	*seqidx = DatumGetInt32(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Identify the corresponding local sequence for the given index. */
+	*seqinfo = seqinfo_local =
+		(LogicalRepSequenceInfo *) list_nth(seqinfos, *seqidx);
+
+	seqinfo_local->last_value = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqinfo_local->is_called = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	seqinfo_local->page_lsn = DatumGetLSN(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_typid = DatumGetObjectId(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_start = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_increment = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_min = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_max = DatumGetInt64(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	remote_cycle = DatumGetBool(slot_getattr(slot, ++col, &isnull));
+	Assert(!isnull);
+
+	/* Sanity check */
+	Assert(col == REMOTE_SEQ_COL_COUNT);
+
+	seqinfo_local->found_on_pub = true;
+
+	*sequence_rel = try_table_open(seqinfo_local->localrelid, RowExclusiveLock);
+
+	/* Sequence was concurrently dropped? */
+	if (!*sequence_rel)
+		return COPYSEQ_SKIPPED;
+
+	tup = SearchSysCache1(SEQRELID, ObjectIdGetDatum(seqinfo_local->localrelid));
+
+	/* This should not happen, as we hold a lock on the sequence */
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for sequence %u",
+			 seqinfo_local->localrelid);
+
+	local_seq = (Form_pg_sequence) GETSTRUCT(tup);
+
+	/* Sequence parameters for remote/local are the same? */
+	if (local_seq->seqtypid != remote_typid ||
+		local_seq->seqstart != remote_start ||
+		local_seq->seqincrement != remote_increment ||
+		local_seq->seqmin != remote_min ||
+		local_seq->seqmax != remote_max ||
+		local_seq->seqcycle != remote_cycle)
+		result = COPYSEQ_MISMATCH;
+
+	/* Sequence was concurrently renamed? */
+	if (strcmp(seqinfo_local->nspname,
+			   get_namespace_name(RelationGetNamespace(*sequence_rel))) ||
+		strcmp(seqinfo_local->seqname, RelationGetRelationName(*sequence_rel)))
+		result = COPYSEQ_MISMATCH;
+
+	ReleaseSysCache(tup);
+	return result;
+}
+
+/*
+ * Apply remote sequence state to local sequence and mark it as
+ * synchronized (READY).
+ */
+static CopySeqResult
+copy_sequence(LogicalRepSequenceInfo *seqinfo, Oid seqowner)
+{
+	UserContext ucxt;
+	AclResult	aclresult;
+	bool		run_as_owner = MySubscription->runasowner;
+	Oid			seqoid = seqinfo->localrelid;
+
+	/*
+	 * If the user did not opt to run as the owner of the subscription
+	 * ('run_as_owner'), then copy the sequence as the owner of the sequence.
+	 */
+	if (!run_as_owner)
+		SwitchToUntrustedUser(seqowner, &ucxt);
+
+	aclresult = pg_class_aclcheck(seqoid, GetUserId(), ACL_UPDATE);
+
+	if (aclresult != ACLCHECK_OK)
+	{
+		if (!run_as_owner)
+			RestoreUserContext(&ucxt);
+
+		return COPYSEQ_INSUFFICIENT_PERM;
+	}
+
+	SetSequence(seqoid, seqinfo->last_value, seqinfo->is_called);
+
+	if (!run_as_owner)
+		RestoreUserContext(&ucxt);
+
+	/*
+	 * Record the remote sequence's LSN in pg_subscription_rel and mark the
+	 * sequence as READY.
+	 */
+	UpdateSubscriptionRelState(MySubscription->oid, seqoid, SUBREL_STATE_READY,
+							   seqinfo->page_lsn, false);
+
+	return COPYSEQ_SUCCESS;
+}
+
+/*
+ * Copy existing data of sequences from the publisher.
+ */
+static void
+copy_sequences(WalReceiverConn *conn)
+{
+	int			cur_batch_base_index = 0;
+	int			n_seqinfos = list_length(seqinfos);
+	List	   *mismatched_seqs_idx = NIL;
+	List	   *missing_seqs_idx = NIL;
+	List	   *insuffperm_seqs_idx = NIL;
+	StringInfo	seqstr = makeStringInfo();
+	StringInfo	cmd = makeStringInfo();
+	MemoryContext oldctx;
+
+#define MAX_SEQUENCES_SYNC_PER_BATCH 100
+
+	ereport(LOG,
+			errmsg("logical replication sequence synchronization for subscription \"%s\" - total unsynchronized: %d",
+				   MySubscription->name, n_seqinfos));
+
+	while (cur_batch_base_index < n_seqinfos)
+	{
+		Oid			seqRow[REMOTE_SEQ_COL_COUNT] = {INT8OID, INT8OID,
+		BOOLOID, LSNOID, OIDOID, INT8OID, INT8OID, INT8OID, INT8OID, BOOLOID};
+		int			batch_size = 0;
+		int			batch_succeeded_count = 0;
+		int			batch_mismatched_count = 0;
+		int			batch_skipped_count = 0;
+		int			batch_insuffperm_count = 0;
+		int			batch_missing_count;
+		Relation	sequence_rel;
+
+		WalRcvExecResult *res;
+		TupleTableSlot *slot;
+
+		StartTransactionCommand();
+
+		for (int idx = cur_batch_base_index; idx < n_seqinfos; idx++)
+		{
+			LogicalRepSequenceInfo *seqinfo =
+				(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+			if (seqstr->len > 0)
+				appendStringInfoString(seqstr, ", ");
+
+			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
+							 seqinfo->nspname, seqinfo->seqname, idx);
+
+			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
+				break;
+		}
+
+		/*
+		 * We deliberately avoid acquiring a local lock on the sequence before
+		 * querying the publisher to prevent potential distributed deadlocks
+		 * in bi-directional replication setups.
+		 *
+		 * Example scenario:
+		 *
+		 * - On each node, a background worker acquires a lock on a sequence
+		 * as part of a sync operation.
+		 *
+		 * - Concurrently, a user transaction attempts to alter the same
+		 * sequence, waiting on the background worker's lock.
+		 *
+		 * - Meanwhile, a query from the other node tries to access metadata
+		 * that depends on the completion of the alter operation.
+		 *
+		 * - This creates a circular wait across nodes:
+		 *
+		 * Node-1: Query -> waits on Alter -> waits on Sync Worker
+		 *
+		 * Node-2: Query -> waits on Alter -> waits on Sync Worker
+		 *
+		 * Since each node only sees part of the wait graph, the deadlock may
+		 * go undetected, leading to indefinite blocking.
+		 *
+		 * Note: Each entry in VALUES includes an index 'seqidx' that
+		 * represents the sequence's position in the local 'seqinfos' list.
+		 * This index is propagated to the query results and later used to
+		 * directly map the fetched publisher sequence rows back to their
+		 * corresponding local entries without relying on result order or name
+		 * matching.
+		 */
+		appendStringInfo(cmd,
+						 "SELECT s.seqidx, ps.*, seq.seqtypid,\n"
+						 "       seq.seqstart, seq.seqincrement, seq.seqmin,\n"
+						 "       seq.seqmax, seq.seqcycle\n"
+						 "FROM ( VALUES %s ) AS s (schname, seqname, seqidx)\n"
+						 "JOIN pg_namespace n ON n.nspname = s.schname\n"
+						 "JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.seqname\n"
+						 "JOIN pg_sequence seq ON seq.seqrelid = c.oid\n"
+						 "JOIN LATERAL pg_get_sequence_data(seq.seqrelid) AS ps ON true\n",
+						 seqstr->data);
+
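+		/*
+		 * For illustration (schema and sequence names here are hypothetical),
+		 * with two sequences the constructed query's FROM clause starts as:
+		 *
+		 *   FROM ( VALUES ('public', 'seq_a', 0), ('public', 'seq_b', 1) )
+		 *        AS s (schname, seqname, seqidx)
+		 *
+		 * The seqidx column carried through the result set is what maps each
+		 * returned publisher row back to its entry in the local 'seqinfos'
+		 * list.
+		 */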
+		res = walrcv_exec(conn, cmd->data, lengthof(seqRow), seqRow);
+		if (res->status != WALRCV_OK_TUPLES)
+			ereport(ERROR,
+					errcode(ERRCODE_CONNECTION_FAILURE),
+					errmsg("could not fetch sequence information from the publisher: %s",
+						   res->err));
+
+		slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+		while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+		{
+			CopySeqResult sync_status;
+			LogicalRepSequenceInfo *seqinfo;
+			int			seqidx;
+
+			CHECK_FOR_INTERRUPTS();
+
+			if (ConfigReloadPending)
+			{
+				ConfigReloadPending = false;
+				ProcessConfigFile(PGC_SIGHUP);
+			}
+
+			sync_status = get_and_validate_seq_info(slot, &sequence_rel,
+													&seqinfo, &seqidx);
+			if (sync_status == COPYSEQ_SUCCESS)
+				sync_status = copy_sequence(seqinfo,
+											sequence_rel->rd_rel->relowner);
+
+			switch (sync_status)
+			{
+				case COPYSEQ_MISMATCH:
+
+					/*
+					 * Remember mismatched sequences in a long-lived memory
+					 * context, since these will be used after the transaction
+					 * commits.
+					 */
+					oldctx = MemoryContextSwitchTo(ApplyContext);
+					mismatched_seqs_idx = lappend_int(mismatched_seqs_idx,
+													  seqidx);
+					MemoryContextSwitchTo(oldctx);
+					batch_mismatched_count++;
+					break;
+				case COPYSEQ_INSUFFICIENT_PERM:
+
+					/*
+					 * Remember the sequences with insufficient privileges in
+					 * a long-lived memory context, since these will be used
+					 * after the transaction commits.
+					 */
+					oldctx = MemoryContextSwitchTo(ApplyContext);
+					insuffperm_seqs_idx = lappend_int(insuffperm_seqs_idx,
+													  seqidx);
+					MemoryContextSwitchTo(oldctx);
+					batch_insuffperm_count++;
+					break;
+				case COPYSEQ_SKIPPED:
+					ereport(LOG,
+							errmsg("skip synchronization of sequence \"%s.%s\" because it has been dropped concurrently",
+								   seqinfo->nspname,
+								   seqinfo->seqname));
+					batch_skipped_count++;
+					break;
+				case COPYSEQ_SUCCESS:
+					elog(DEBUG1,
+						 "logical replication synchronization for subscription \"%s\", sequence \"%s.%s\" has finished",
+						 MySubscription->name, seqinfo->nspname,
+						 seqinfo->seqname);
+					batch_succeeded_count++;
+					break;
+			}
+
+			if (sequence_rel)
+				table_close(sequence_rel, NoLock);
+		}
+
+		ExecDropSingleTupleTableSlot(slot);
+		walrcv_clear_result(res);
+		resetStringInfo(seqstr);
+		resetStringInfo(cmd);
+
+		batch_missing_count = batch_size - (batch_succeeded_count +
+											batch_mismatched_count +
+											batch_insuffperm_count +
+											batch_skipped_count);
+
+		elog(DEBUG1,
+			 "logical replication sequence synchronization for subscription \"%s\" - batch #%d = %d attempted, %d succeeded, %d skipped, %d mismatched, %d insufficient permission, %d missing from publisher",
+			 MySubscription->name,
+			 (cur_batch_base_index / MAX_SEQUENCES_SYNC_PER_BATCH) + 1,
+			 batch_size, batch_succeeded_count, batch_skipped_count,
+			 batch_mismatched_count, batch_insuffperm_count,
+			 batch_missing_count);
+
+		/* Commit this batch, and prepare for next batch */
+		CommitTransactionCommand();
+
+		if (batch_missing_count)
+		{
+			for (int idx = cur_batch_base_index; idx < cur_batch_base_index + batch_size; idx++)
+			{
+				LogicalRepSequenceInfo *seqinfo =
+					(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
+
+				/* If the sequence was not found on publisher, record it */
+				if (!seqinfo->found_on_pub)
+					missing_seqs_idx = lappend_int(missing_seqs_idx, idx);
+			}
+		}
+
+		/*
+		 * Advance cur_batch_base_index by batch_size (the number of sequences
+		 * requested) rather than by the number of rows fetched, since
+		 * sequences missing on the publisher return no rows.
+		 */
+		cur_batch_base_index += batch_size;
+	}
+
+	/* Report permission issues, mismatches, or missing sequences */
+	if (insuffperm_seqs_idx || mismatched_seqs_idx || missing_seqs_idx)
+		report_sequence_errors(insuffperm_seqs_idx, mismatched_seqs_idx,
+							   missing_seqs_idx);
+}
+
+/*
+ * Identifies sequences that require synchronization and initiates the
+ * synchronization process.
+ */
+static void
+LogicalRepSyncSequences(void)
+{
+	char	   *err;
+	bool		must_use_password;
+	Relation	rel;
+	HeapTuple	tup;
+	ScanKeyData skey[2];
+	SysScanDesc scan;
+	Oid			subid = MyLogicalRepWorker->subid;
+	StringInfoData app_name;
+
+	StartTransactionCommand();
+
+	rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+	ScanKeyInit(&skey[0],
+				Anum_pg_subscription_rel_srsubid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(subid));
+
+	ScanKeyInit(&skey[1],
+				Anum_pg_subscription_rel_srsubstate,
+				BTEqualStrategyNumber, F_CHAREQ,
+				CharGetDatum(SUBREL_STATE_INIT));
+
+	scan = systable_beginscan(rel, InvalidOid, false,
+							  NULL, 2, skey);
+	while (HeapTupleIsValid(tup = systable_getnext(scan)))
+	{
+		Form_pg_subscription_rel subrel;
+		LogicalRepSequenceInfo *seq;
+		Relation	sequence_rel;
+		MemoryContext oldctx;
+
+		CHECK_FOR_INTERRUPTS();
+
+		subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
+
+		sequence_rel = try_table_open(subrel->srrelid, RowExclusiveLock);
+
+		/* Skip if sequence was dropped concurrently */
+		if (!sequence_rel)
+			continue;
+
+		/* Skip if the relation is not a sequence */
+		if (sequence_rel->rd_rel->relkind != RELKIND_SEQUENCE)
+			continue;
+
+		/*
+		 * The worker needs to process sequences across transaction
+		 * boundaries, so allocate them in a long-lived context.
+		 */
+		oldctx = MemoryContextSwitchTo(ApplyContext);
+
+		seq = palloc0_object(LogicalRepSequenceInfo);
+		seq->localrelid = subrel->srrelid;
+		seq->nspname = get_namespace_name(RelationGetNamespace(sequence_rel));
+		seq->seqname = pstrdup(RelationGetRelationName(sequence_rel));
+		seqinfos = lappend(seqinfos, seq);
+
+		MemoryContextSwitchTo(oldctx);
+
+		table_close(sequence_rel, NoLock);
+	}
+
+	/* Cleanup */
+	systable_endscan(scan);
+	table_close(rel, AccessShareLock);
+
+	CommitTransactionCommand();
+
+	/*
+	 * Exit early if no catalog entries found, likely due to concurrent drops.
+	 */
+	if (!seqinfos)
+		return;
+
+	/* Is the use of a password mandatory? */
+	must_use_password = MySubscription->passwordrequired &&
+		!MySubscription->ownersuperuser;
+
+	initStringInfo(&app_name);
+	appendStringInfo(&app_name, "pg_%u_sequence_sync_" UINT64_FORMAT,
+					 MySubscription->oid, GetSystemIdentifier());
+
+	/*
+	 * Establish the connection to the publisher for sequence synchronization.
+	 */
+	LogRepWorkerWalRcvConn =
+		walrcv_connect(MySubscription->conninfo, true, true,
+					   must_use_password,
+					   app_name.data, &err);
+	if (LogRepWorkerWalRcvConn == NULL)
+		ereport(ERROR,
+				errcode(ERRCODE_CONNECTION_FAILURE),
+				errmsg("sequencesync worker for subscription \"%s\" could not connect to the publisher: %s",
+					   MySubscription->name, err));
+
+	pfree(app_name.data);
+
+	copy_sequences(LogRepWorkerWalRcvConn);
+}
+
+/*
+ * Execute the initial sync with error handling. Disable the subscription,
+ * if required.
+ *
+ * Note that we don't handle FATAL errors, which are most likely caused by
+ * system resource exhaustion and are not repeatable.
+ */
+static void
+start_sequence_sync()
+{
+	Assert(am_sequencesync_worker());
+
+	PG_TRY();
+	{
+		/* Call initial sync. */
+		LogicalRepSyncSequences();
+	}
+	PG_CATCH();
+	{
+		if (MySubscription->disableonerr)
+			DisableSubscriptionAndExit();
+		else
+		{
+			/*
+			 * Report the worker failed during sequence synchronization. Abort
+			 * the current transaction so that the stats message is sent in an
+			 * idle state.
+			 */
+			AbortOutOfAnyTransaction();
+			PG_RE_THROW();
+		}
+	}
+	PG_END_TRY();
+}
+
+/* Logical Replication sequencesync worker entry point */
+void
+SequenceSyncWorkerMain(Datum main_arg)
+{
+	int			worker_slot = DatumGetInt32(main_arg);
+
+	SetupApplyOrSyncWorker(worker_slot);
+
+	start_sequence_sync();
+
+	FinishSyncWorker();
+}
diff --git a/src/backend/replication/logical/syncutils.c b/src/backend/replication/logical/syncutils.c
index ae8c9385916..19f98941053 100644
--- a/src/backend/replication/logical/syncutils.c
+++ b/src/backend/replication/logical/syncutils.c
@@ -16,6 +16,7 @@
 
 #include "catalog/pg_subscription_rel.h"
 #include "pgstat.h"
+#include "replication/logicallauncher.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
 #include "utils/lsyscache.h"
@@ -48,6 +49,8 @@ static SyncingRelationsState relation_states_validity = SYNC_RELATIONS_STATE_NEE
 pg_noreturn void
 FinishSyncWorker(void)
 {
+	Assert(am_sequencesync_worker() || am_tablesync_worker());
+
 	/*
 	 * Commit any outstanding transaction. This is the usual case, unless
 	 * there was nothing to do for the table.
@@ -61,16 +64,32 @@ FinishSyncWorker(void)
 	/* And flush all writes. */
 	XLogFlush(GetXLogWriteRecPtr());
 
-	StartTransactionCommand();
-	ereport(LOG,
-			(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
-					MySubscription->name,
-					get_rel_name(MyLogicalRepWorker->relid))));
-	CommitTransactionCommand();
+	if (am_sequencesync_worker())
+	{
+		ereport(LOG,
+				errmsg("logical replication sequence synchronization worker for subscription \"%s\" has finished",
+					   MySubscription->name));
+
+		/*
+		 * Find the leader apply worker and reset last_seqsync_start_time.
+		 * This ensures that the apply worker can restart the sequence sync
+		 * worker promptly whenever required.
+		 */
+		logicalrep_reset_seqsync_start_time();
+	}
+	else
+	{
+		StartTransactionCommand();
+		ereport(LOG,
+				errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has finished",
+					   MySubscription->name,
+					   get_rel_name(MyLogicalRepWorker->relid)));
+		CommitTransactionCommand();
 
-	/* Find the leader apply worker and signal it. */
-	logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
-							 InvalidOid);
+		/* Find the leader apply worker and signal it. */
+		logicalrep_worker_wakeup(WORKERTYPE_APPLY, MyLogicalRepWorker->subid,
+								 InvalidOid);
+	}
 
 	/* Stop gracefully */
 	proc_exit(0);
@@ -86,7 +105,52 @@ InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue)
 }
 
 /*
- * Process possible state change(s) of relations that are being synchronized.
+ * Attempt to launch a sync worker for one or more sequences or a table, if
+ * a worker slot is available and the retry interval has elapsed.
+ *
+ * wtype: sync worker type.
+ * nsyncworkers: number of currently running sync workers for the subscription.
+ * relid: InvalidOid for a sequencesync worker, the relation OID for a
+ * tablesync worker.
+ * last_start_time: pointer to the last start time of the worker.
+ */
+void
+launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers, Oid relid,
+				   TimestampTz *last_start_time)
+{
+	TimestampTz now;
+
+	Assert((wtype == WORKERTYPE_TABLESYNC && OidIsValid(relid)) ||
+		   (wtype == WORKERTYPE_SEQUENCESYNC && !OidIsValid(relid)));
+
+	/* Give up if no sync worker slot is free */
+	if (nsyncworkers >= max_sync_workers_per_subscription)
+		return;
+
+	now = GetCurrentTimestamp();
+
+	if (!(*last_start_time) ||
+		TimestampDifferenceExceeds(*last_start_time, now,
+								   wal_retrieve_retry_interval))
+	{
+		/*
+		 * Set the last_start_time even if we fail to start the worker, so
+		 * that we won't retry until wal_retrieve_retry_interval has elapsed.
+		 */
+		*last_start_time = now;
+		(void) logicalrep_worker_launch(wtype,
+										MyLogicalRepWorker->dbid,
+										MySubscription->oid,
+										MySubscription->name,
+										MyLogicalRepWorker->userid,
+										relid, DSM_HANDLE_INVALID, false);
+	}
+}
+
+/*
+ * Process possible state change(s) of relations that are being synchronized
+ * and start new tablesync workers for the newly added tables. Also, start a
+ * new sequencesync worker for the newly added sequences.
  */
 void
 ProcessSyncingRelations(XLogRecPtr current_lsn)
@@ -108,6 +172,12 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 
 		case WORKERTYPE_APPLY:
 			ProcessSyncingTablesForApply(current_lsn);
+			ProcessSequencesForSync();
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "sequence synchronization worker is not expected to process relations");
 			break;
 
 		case WORKERTYPE_UNKNOWN:
@@ -117,17 +187,29 @@ ProcessSyncingRelations(XLogRecPtr current_lsn)
 }
 
 /*
- * Common code to fetch the up-to-date sync state info into the static lists.
+ * Common code to fetch the up-to-date sync state info for tables and sequences.
  *
- * Returns true if subscription has 1 or more tables, else false.
+ * The pg_subscription_rel catalog is shared by tables and sequences. Changes
+ * to either sequences or tables can affect the validity of relation states, so
+ * we identify non-READY tables and non-READY sequences together to ensure
+ * consistency.
  *
- * Note: If this function started the transaction (indicated by the parameter)
- * then it is the caller's responsibility to commit it.
+ * has_pending_subtables: true if the subscription has one or more tables that
+ * are not in READY state, otherwise false.
+ * has_pending_subsequences: true if the subscription has one or more sequences
+ * that are not in READY state, otherwise false.
  */
-bool
-FetchRelationStates(bool *started_tx)
+void
+FetchRelationStates(bool *has_pending_subtables,
+					bool *has_pending_subsequences,
+					bool *started_tx)
 {
+	/*
+	 * has_subtables and has_subsequences_non_ready are static, since their
+	 * values remain valid until the relevant catalog is next invalidated.
+	 */
 	static bool has_subtables = false;
+	static bool has_subsequences_non_ready = false;
 
 	*started_tx = false;
 
@@ -135,10 +217,10 @@ FetchRelationStates(bool *started_tx)
 	{
 		MemoryContext oldctx;
 		List	   *rstates;
-		ListCell   *lc;
 		SubscriptionRelState *rstate;
 
 		relation_states_validity = SYNC_RELATIONS_STATE_REBUILD_STARTED;
+		has_subsequences_non_ready = false;
 
 		/* Clean the old lists. */
 		list_free_deep(table_states_not_ready);
@@ -150,17 +232,23 @@ FetchRelationStates(bool *started_tx)
 			*started_tx = true;
 		}
 
-		/* Fetch tables that are in non-ready state. */
-		rstates = GetSubscriptionRelations(MySubscription->oid, true, false,
+		/* Fetch tables and sequences that are in non-READY state. */
+		rstates = GetSubscriptionRelations(MySubscription->oid, true, true,
 										   true);
 
 		/* Allocate the tracking info in a permanent memory context. */
 		oldctx = MemoryContextSwitchTo(CacheMemoryContext);
-		foreach(lc, rstates)
+		foreach_ptr(SubscriptionRelState, subrel, rstates)
 		{
-			rstate = palloc(sizeof(SubscriptionRelState));
-			memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
-			table_states_not_ready = lappend(table_states_not_ready, rstate);
+			if (get_rel_relkind(subrel->relid) == RELKIND_SEQUENCE)
+				has_subsequences_non_ready = true;
+			else
+			{
+				rstate = palloc(sizeof(SubscriptionRelState));
+				memcpy(rstate, subrel, sizeof(SubscriptionRelState));
+				table_states_not_ready = lappend(table_states_not_ready,
+												 rstate);
+			}
 		}
 		MemoryContextSwitchTo(oldctx);
 
@@ -185,5 +273,9 @@ FetchRelationStates(bool *started_tx)
 			relation_states_validity = SYNC_RELATIONS_STATE_VALID;
 	}
 
-	return has_subtables;
+	if (has_pending_subtables)
+		*has_pending_subtables = has_subtables;
+
+	if (has_pending_subsequences)
+		*has_pending_subsequences = has_subsequences_non_ready;
 }
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 58c98488d7b..e5a2856fd17 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -374,14 +374,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	};
 	static HTAB *last_start_times = NULL;
 	ListCell   *lc;
-	bool		started_tx = false;
+	bool		started_tx;
 	bool		should_exit = false;
 	Relation	rel = NULL;
 
 	Assert(!IsTransactionState());
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	FetchRelationStates(&started_tx);
+	FetchRelationStates(NULL, NULL, &started_tx);
 
 	/*
 	 * Prepare a hash table for tracking last start times of workers, to avoid
@@ -415,6 +415,14 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 	{
 		SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
 
+		if (!started_tx)
+		{
+			StartTransactionCommand();
+			started_tx = true;
+		}
+
+		Assert(get_rel_relkind(rstate->relid) != RELKIND_SEQUENCE);
+
 		if (rstate->state == SUBREL_STATE_SYNCDONE)
 		{
 			/*
@@ -428,11 +436,6 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 
 				rstate->state = SUBREL_STATE_READY;
 				rstate->lsn = current_lsn;
-				if (!started_tx)
-				{
-					StartTransactionCommand();
-					started_tx = true;
-				}
 
 				/*
 				 * Remove the tablesync origin tracking if exists.
@@ -552,43 +555,19 @@ ProcessSyncingTablesForApply(XLogRecPtr current_lsn)
 				 */
 				int			nsyncworkers =
 					logicalrep_sync_worker_count(MyLogicalRepWorker->subid);
+				struct tablesync_start_time_mapping *hentry;
+				bool		found;
 
 				/* Now safe to release the LWLock */
 				LWLockRelease(LogicalRepWorkerLock);
 
-				/*
-				 * If there are free sync worker slot(s), start a new sync
-				 * worker for the table.
-				 */
-				if (nsyncworkers < max_sync_workers_per_subscription)
-				{
-					TimestampTz now = GetCurrentTimestamp();
-					struct tablesync_start_time_mapping *hentry;
-					bool		found;
+				hentry = hash_search(last_start_times, &rstate->relid,
+									 HASH_ENTER, &found);
+				if (!found)
+					hentry->last_start_time = 0;
 
-					hentry = hash_search(last_start_times, &rstate->relid,
-										 HASH_ENTER, &found);
-
-					if (!found ||
-						TimestampDifferenceExceeds(hentry->last_start_time, now,
-												   wal_retrieve_retry_interval))
-					{
-						/*
-						 * Set the last_start_time even if we fail to start
-						 * the worker, so that we won't retry until
-						 * wal_retrieve_retry_interval has elapsed.
-						 */
-						hentry->last_start_time = now;
-						(void) logicalrep_worker_launch(WORKERTYPE_TABLESYNC,
-														MyLogicalRepWorker->dbid,
-														MySubscription->oid,
-														MySubscription->name,
-														MyLogicalRepWorker->userid,
-														rstate->relid,
-														DSM_HANDLE_INVALID,
-														false);
-					}
-				}
+				launch_sync_worker(WORKERTYPE_TABLESYNC, nsyncworkers,
+								   rstate->relid, &hentry->last_start_time);
 			}
 		}
 	}
@@ -1432,8 +1411,8 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
 	}
 
 	/*
-	 * Make sure that the copy command runs as the table owner, unless the
-	 * user has opted out of that behaviour.
+	 * If the user did not opt to run as the owner of the subscription
+	 * ('run_as_owner'), then copy the table as the owner of the table.
 	 */
 	run_as_owner = MySubscription->runasowner;
 	if (!run_as_owner)
@@ -1596,7 +1575,7 @@ run_tablesync_worker()
 
 /* Logical Replication Tablesync worker entry point */
 void
-TablesyncWorkerMain(Datum main_arg)
+TableSyncWorkerMain(Datum main_arg)
 {
 	int			worker_slot = DatumGetInt32(main_arg);
 
@@ -1618,11 +1597,11 @@ TablesyncWorkerMain(Datum main_arg)
 bool
 AllTablesyncsReady(void)
 {
-	bool		started_tx = false;
-	bool		has_subrels = false;
+	bool		started_tx;
+	bool		has_tables;
 
 	/* We need up-to-date sync state info for subscription tables here. */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1634,7 +1613,7 @@ AllTablesyncsReady(void)
 	 * Return false when there are no tables in subscription or not all tables
 	 * are in ready state; true otherwise.
 	 */
-	return has_subrels && (table_states_not_ready == NIL);
+	return has_tables && (table_states_not_ready == NIL);
 }
 
 /*
@@ -1649,10 +1628,10 @@ bool
 HasSubscriptionTablesCached(void)
 {
 	bool		started_tx;
-	bool		has_subrels;
+	bool		has_tables;
 
 	/* We need up-to-date subscription tables info here */
-	has_subrels = FetchRelationStates(&started_tx);
+	FetchRelationStates(&has_tables, NULL, &started_tx);
 
 	if (started_tx)
 	{
@@ -1660,7 +1639,7 @@ HasSubscriptionTablesCached(void)
 		pgstat_report_stat(true);
 	}
 
-	return has_subrels;
+	return has_tables;
 }
 
 /*
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 7edd1c9cf06..cbf311efd57 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -702,6 +702,11 @@ should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
 					(rel->state == SUBREL_STATE_SYNCDONE &&
 					 rel->statelsn <= remote_final_lsn));
 
+		case WORKERTYPE_SEQUENCESYNC:
+			/* Should never happen. */
+			elog(ERROR, "sequence synchronization worker is not expected to apply changes");
+			break;
+
 		case WORKERTYPE_UNKNOWN:
 			/* Should never happen. */
 			elog(ERROR, "Unknown worker type");
@@ -1243,7 +1248,10 @@ apply_handle_commit(StringInfo s)
 
 	apply_handle_commit_internal(&commit_data);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1365,7 +1373,10 @@ apply_handle_prepare(StringInfo s)
 
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -1421,7 +1432,10 @@ apply_handle_commit_prepared(StringInfo s)
 	store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	clear_subscription_skip_lsn(prepare_data.end_lsn);
@@ -1487,7 +1501,10 @@ apply_handle_rollback_prepared(StringInfo s)
 	store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
 	in_remote_transaction = false;
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(rollback_data.rollback_end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -1622,7 +1639,10 @@ apply_handle_stream_prepare(StringInfo s)
 
 	pgstat_report_stat(false);
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(prepare_data.end_lsn);
 
 	/*
@@ -2465,7 +2485,10 @@ apply_handle_stream_commit(StringInfo s)
 			break;
 	}
 
-	/* Process any tables that are being synchronized in parallel. */
+	/*
+	 * Process any tables that are being synchronized in parallel, as well as
+	 * any newly added tables or sequences.
+	 */
 	ProcessSyncingRelations(commit_data.end_lsn);
 
 	pgstat_report_activity(STATE_IDLE, NULL);
@@ -4137,7 +4160,10 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			AcceptInvalidationMessages();
 			maybe_reread_subscription();
 
-			/* Process any table synchronization changes. */
+			/*
+			 * Process any relations that are being synchronized in parallel
+			 * and any newly added tables or sequences.
+			 */
 			ProcessSyncingRelations(last_received);
 		}
 
@@ -5700,8 +5726,8 @@ run_apply_worker()
 }
 
 /*
- * Common initialization for leader apply worker, parallel apply worker and
- * tablesync worker.
+ * Common initialization for leader apply worker, parallel apply worker,
+ * tablesync worker and sequencesync worker.
  *
  * Initialize the database connection, in-memory subscription and necessary
  * config options.
@@ -5812,6 +5838,10 @@ InitializeLogRepWorker(void)
 				(errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
 						MySubscription->name,
 						get_rel_name(MyLogicalRepWorker->relid))));
+	else if (am_sequencesync_worker())
+		ereport(LOG,
+				(errmsg("logical replication sequence synchronization worker for subscription \"%s\" has started",
+						MySubscription->name)));
 	else
 		ereport(LOG,
 				(errmsg("logical replication apply worker for subscription \"%s\" has started",
@@ -5831,14 +5861,16 @@ replorigin_reset(int code, Datum arg)
 	replorigin_session_origin_timestamp = 0;
 }
 
-/* Common function to setup the leader apply or tablesync worker. */
+/*
+ * Common function to setup the leader apply, tablesync and sequencesync worker.
+ */
 void
 SetupApplyOrSyncWorker(int worker_slot)
 {
 	/* Attach to slot */
 	logicalrep_worker_attach(worker_slot);
 
-	Assert(am_tablesync_worker() || am_leader_apply_worker());
+	Assert(am_tablesync_worker() || am_sequencesync_worker() || am_leader_apply_worker());
 
 	/* Setup signal handling */
 	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -5921,9 +5953,15 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	/* Report the worker failed during either table synchronization or apply */
-	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-									 !am_tablesync_worker());
+	if (am_leader_apply_worker() || am_tablesync_worker())
+	{
+		/*
+		 * Report the worker failed during either table synchronization or
+		 * apply.
+		 */
+		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+										 !am_tablesync_worker());
+	}
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index d6fc8333850..0b49b98da99 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -1924,7 +1924,7 @@
 },
 
 { name => 'max_sync_workers_per_subscription', type => 'int', context => 'PGC_SIGHUP', group => 'REPLICATION_SUBSCRIBERS',
-  short_desc => 'Maximum number of table synchronization workers per subscription.',
+  short_desc => 'Maximum number of workers per subscription for synchronizing tables and sequences.',
   variable => 'max_sync_workers_per_subscription',
   boot_val => '2',
   min => '0',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 9121a382f76..34b7fddb0e7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3433,7 +3433,7 @@
   proname => 'pg_sequence_last_value', provolatile => 'v', proparallel => 'u',
   prorettype => 'int8', proargtypes => 'regclass',
   prosrc => 'pg_sequence_last_value' },
-{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump',
+{ oid => '6427', descr => 'return sequence tuple, for use by pg_dump and sequence synchronization',
   proname => 'pg_get_sequence_data', provolatile => 'v', proparallel => 'u',
   prorettype => 'record', proargtypes => 'regclass',
   proallargtypes => '{regclass,int8,bool,pg_lsn}', proargmodes => '{i,o,o,o}',
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 9f88498ecd3..9508e553b52 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -82,6 +82,29 @@ typedef struct SubscriptionRelState
 	char		state;
 } SubscriptionRelState;
 
+/*
+ * Holds local sequence identity and corresponding publisher values used during
+ * sequence synchronization.
+ */
+typedef struct LogicalRepSequenceInfo
+{
+	/* Sequence information retrieved from the local node */
+	char	   *seqname;
+	char	   *nspname;
+	Oid			localrelid;
+
+	/* Sequence information retrieved from the publisher node */
+	XLogRecPtr	page_lsn;
+	int64		last_value;
+	bool		is_called;
+
+	/*
+	 * True if the sequence identified by (nspname, seqname) exists on the
+	 * publisher.
+	 */
+	bool		found_on_pub;
+} LogicalRepSequenceInfo;
+
 extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,
 									XLogRecPtr sublsn, bool retain_lock);
 extern void UpdateSubscriptionRelState(Oid subid, Oid relid, char state,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9ac0b67683d..46b4d89dd6e 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
 extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
 extern void DeleteSequenceTuple(Oid relid);
 extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid relid, int64 next, bool is_called);
 extern void ResetSequenceCaches(void);
 
 extern void seq_redo(XLogReaderState *record);
diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h
index 88912606e4d..56fa79b648e 100644
--- a/src/include/replication/logicalworker.h
+++ b/src/include/replication/logicalworker.h
@@ -18,7 +18,8 @@ extern PGDLLIMPORT volatile sig_atomic_t ParallelApplyMessagePending;
 
 extern void ApplyWorkerMain(Datum main_arg);
 extern void ParallelApplyWorkerMain(Datum main_arg);
-extern void TablesyncWorkerMain(Datum main_arg);
+extern void TableSyncWorkerMain(Datum main_arg);
+extern void SequenceSyncWorkerMain(Datum main_arg);
 
 extern bool IsLogicalWorker(void);
 extern bool IsLogicalParallelApplyWorker(void);
diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h
index e23fa9a4514..f081619f151 100644
--- a/src/include/replication/worker_internal.h
+++ b/src/include/replication/worker_internal.h
@@ -30,6 +30,7 @@ typedef enum LogicalRepWorkerType
 {
 	WORKERTYPE_UNKNOWN = 0,
 	WORKERTYPE_TABLESYNC,
+	WORKERTYPE_SEQUENCESYNC,
 	WORKERTYPE_APPLY,
 	WORKERTYPE_PARALLEL_APPLY,
 } LogicalRepWorkerType;
@@ -106,6 +107,8 @@ typedef struct LogicalRepWorker
 	TimestampTz last_recv_time;
 	XLogRecPtr	reply_lsn;
 	TimestampTz reply_time;
+
+	TimestampTz last_seqsync_start_time;
 } LogicalRepWorker;
 
 /*
@@ -271,6 +274,7 @@ extern void logicalrep_worker_wakeup(LogicalRepWorkerType wtype, Oid subid,
 									 Oid relid);
 extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);
 
+extern void logicalrep_reset_seqsync_start_time(void);
 extern int	logicalrep_sync_worker_count(Oid subid);
 
 extern void ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
@@ -282,11 +286,15 @@ extern void UpdateTwoPhaseState(Oid suboid, char new_state);
 
 extern void ProcessSyncingTablesForSync(XLogRecPtr current_lsn);
 extern void ProcessSyncingTablesForApply(XLogRecPtr current_lsn);
+extern void ProcessSequencesForSync(void);
 
 pg_noreturn extern void FinishSyncWorker(void);
 extern void InvalidateSyncingRelStates(Datum arg, int cacheid, uint32 hashvalue);
+extern void launch_sync_worker(LogicalRepWorkerType wtype, int nsyncworkers,
+							   Oid relid, TimestampTz *last_start_time);
 extern void ProcessSyncingRelations(XLogRecPtr current_lsn);
-extern bool FetchRelationStates(bool *started_tx);
+extern void FetchRelationStates(bool *has_pending_subtables,
+								bool *has_pending_sequences, bool *started_tx);
 
 extern void stream_start_internal(TransactionId xid, bool first_segment);
 extern void stream_stop_internal(TransactionId xid);
@@ -353,13 +361,21 @@ extern void pa_xact_finish(ParallelApplyWorkerInfo *winfo,
 
 #define isParallelApplyWorker(worker) ((worker)->in_use && \
 									   (worker)->type == WORKERTYPE_PARALLEL_APPLY)
-#define isTablesyncWorker(worker) ((worker)->in_use && \
+#define isTableSyncWorker(worker) ((worker)->in_use && \
 								   (worker)->type == WORKERTYPE_TABLESYNC)
+#define isSequenceSyncWorker(worker) ((worker)->in_use && \
+									  (worker)->type == WORKERTYPE_SEQUENCESYNC)
 
 static inline bool
 am_tablesync_worker(void)
 {
-	return isTablesyncWorker(MyLogicalRepWorker);
+	return isTableSyncWorker(MyLogicalRepWorker);
+}
+
+static inline bool
+am_sequencesync_worker(void)
+{
+	return isSequenceSyncWorker(MyLogicalRepWorker);
 }
 
 static inline bool
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index 557fc91c017..ef40ca979b1 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -1,7 +1,7 @@
 
 # Copyright (c) 2025, PostgreSQL Global Development Group
 
-# This tests that sequences are registered to be synced to the subscriber
+# This tests that sequences are synced correctly to the subscriber
 use strict;
 use warnings;
 use PostgreSQL::Test::Cluster;
@@ -14,6 +14,7 @@ my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
 # Avoid checkpoint during the test, otherwise, extra values will be fetched for
 # the sequences which will cause the test to fail randomly.
 $node_publisher->init(allows_streaming => 'logical');
+$node_publisher->append_conf('postgresql.conf', 'checkpoint_timeout = 1h');
 $node_publisher->start;
 
 # Initialize subscriber node
@@ -28,7 +29,14 @@ my $ddl = qq(
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
-# Setup the same structure on the subscriber
+# Setup the same structure on the subscriber, plus some extra sequences that
+# we'll create on the publisher later
+$ddl = qq(
+	CREATE TABLE regress_seq_test (v BIGINT);
+	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE regress_s2;
+	CREATE SEQUENCE regress_s3;
+);
 $node_subscriber->safe_psql('postgres', $ddl);
 
 # Insert initial test data
@@ -46,10 +54,165 @@ $node_subscriber->safe_psql('postgres',
 	"CREATE SUBSCRIPTION regress_seq_sub CONNECTION '$publisher_connstr' PUBLICATION regress_seq_pub"
 );
 
-# Confirm sequences can be listed in pg_subscription_rel
-my $result = $node_subscriber->safe_psql('postgres',
-	"SELECT relname, srsubstate FROM pg_class, pg_subscription_rel WHERE oid = srrelid"
+# Wait for initial sync to finish
+my $synced_query =
+  "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the initial data on subscriber
+my $result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'initial test data replicated');
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should sync sequences newly
+# added on the publisher, but changes to previously synchronized sequences
+# should not be synced.
+##########
+
+# Create a new sequence 'regress_s2', and update existing sequence 'regress_s1'
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s2;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+));
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_publisher->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|31|t', 'Check sequence value in the publisher');
+
+# Check - existing sequence ('regress_s1') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '100|0|t', 'REFRESH PUBLICATION will not sync existing sequence');
+
+# Check - newly published sequence ('regress_s2') is synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '100|0|t',
+	'REFRESH PUBLICATION will sync newly published sequence');
+
+##########
+# Test: REFRESH SEQUENCES and REFRESH PUBLICATION (copy_data = off)
+#
+# 1. ALTER SUBSCRIPTION ... REFRESH SEQUENCES should re-synchronize all
+#    existing sequences, but not synchronize newly added ones.
+# 2. ALTER SUBSCRIPTION ... REFRESH PUBLICATION with (copy_data = off) should
+#    also not update sequence values for newly added sequences.
+##########
+
+# Create a new sequence 'regress_s3', and update the existing sequence
+# 'regress_s2'.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s3;
+	INSERT INTO regress_seq_test SELECT nextval('regress_s3') FROM generate_series(1,100);
+
+	-- Existing sequence
+	INSERT INTO regress_seq_test SELECT nextval('regress_s2') FROM generate_series(1,100);
+));
+
+# 1. Do ALTER SUBSCRIPTION ... REFRESH SEQUENCES
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH SEQUENCES;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - existing sequences ('regress_s1' and 'regress_s2') are synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s1;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s2;
+));
+is($result, '200|0|t', 'REFRESH SEQUENCES will sync existing sequences');
+
+# Check - newly published sequence ('regress_s3') is not synced
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH SEQUENCES will not sync newly published sequence');
+
+# 2. Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION with copy_data as false
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION WITH (copy_data = false);
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check - newly published sequence ('regress_s3') is not synced when
+# (copy_data = off).
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, log_cnt, is_called FROM regress_s3;
+));
+is($result, '1|0|f',
+	'REFRESH PUBLICATION will not sync newly published sequence with copy_data as off'
 );
-is($result, 'regress_s1|i', "Sequence can be in pg_subscription_rel catalog");
+
+##########
+# ALTER SUBSCRIPTION ... REFRESH PUBLICATION should report an error when:
+# a) sequence definitions differ between the publisher and subscriber, or
+# b) a sequence is missing on the publisher.
+##########
+
+# Create a new sequence 'regress_s4' whose START value is not the same in the
+# publisher and subscriber.
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 1 INCREMENT 2;
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE regress_s4 START 10 INCREMENT 2;
+));
+
+my $log_offset = -s $node_subscriber->logfile;
+
+# Do ALTER SUBSCRIPTION ... REFRESH PUBLICATION
+$node_subscriber->safe_psql('postgres',
+	"ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION");
+
+# Verify that an error is logged for parameter differences on sequence
+# ('regress_s4').
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	$log_offset);
+
+# Verify that an error is logged for the missing sequence ('regress_s4').
+$node_publisher->safe_psql('postgres', qq(DROP SEQUENCE regress_s4;));
+
+$node_subscriber->wait_for_log(
+	qr/WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	$log_offset);
 
 done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 018b5919cf6..2ca7b75af57 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -526,6 +526,7 @@ CopyMethod
 CopyMultiInsertBuffer
 CopyMultiInsertInfo
 CopyOnErrorChoice
+CopySeqResult
 CopySource
 CopyStmt
 CopyToRoutine
@@ -1629,6 +1630,7 @@ LogicalRepRelId
 LogicalRepRelMapEntry
 LogicalRepRelation
 LogicalRepRollbackPreparedTxnData
+LogicalRepSequenceInfo
 LogicalRepStreamAbortData
 LogicalRepTupleData
 LogicalRepTyp
-- 
2.43.0

v20251103-0003-Add-seq_sync_error_count-to-subscription-s.patch (text/x-patch; charset=US-ASCII)
From 211bed2464181d7656f797535e6b45082655f04c Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 3 Nov 2025 19:50:16 +0530
Subject: [PATCH v20251103 3/3] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
---
 doc/src/sgml/monitoring.sgml                  |  9 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 18 ++---
 .../utils/activity/pgstat_subscription.c      | 27 +++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 ++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 80 ++++++++++++-------
 11 files changed, 123 insertions(+), 60 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index e3523ac882d..1dc0024ab92 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2193,6 +2193,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dec8df4f8ee..059e8778ca7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1415,6 +1415,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 623311b55ab..22e3018f1ea 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -710,6 +710,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e5a2856fd17..dcc6124cc73 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index cbf311efd57..8b89eddb0cc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5606,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 MyLogicalRepWorker->type);
 
 			PG_RE_THROW();
 		}
@@ -5953,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		 * Report the worker failed during either table synchronization or
-		 * apply.
-		 */
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										 !am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during either sequence synchronization or
+	 * table synchronization or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a710508979e..1521d6e2ab4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2203,7 +2203,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2221,25 +2221,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2256,6 +2258,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34b7fddb0e7..5cf9e12fcb9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 7ae503e71a2..a0610bb3e31 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -769,7 +772,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 77e25ca029e..fe20f613c3a 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..23f3511f9a4 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,45 +42,56 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' and seq_sync_error_count > 0 and sync_error_count > 0
 	])
 	  or die
 	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
 
+	# Change the sequence start value on the subscriber so that it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT count(1) = 1 FROM pg_subscription_rel
-	WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+	SELECT count(1) = 2 FROM pg_subscription_rel
+	WHERE srrelid IN ('$table_name'::regclass, '$sequence_name'::regclass) AND srsubstate in ('r', 's')
 	])
 	  or die
 	  qq(Timed out while waiting for subscriber to synchronize data for table '$table_name'.);
@@ -136,14 +149,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +167,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +176,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +189,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +216,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +234,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +255,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +270,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0

v20251103-0002-Documentation-for-sequence-synchronization.patch (text/x-patch; charset=US-ASCII)
From 3669acbb92c3d5cb46f179df57c7b127a0962100 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251103 2/3] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 224 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  19 +-
 7 files changed, 275 insertions(+), 28 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 06d1e4403b5..f5cbc68b938 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table/sequence synchronization
+        worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, and table/sequence
+        synchronization workers.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5359,9 +5359,11 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
         during the subscription initialization or when new tables are added.
+        One additional worker is also needed for sequence synchronization.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates the last value set for the sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..9e78c2f0465 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences can be
+   synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,194 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences currently known to the subscription.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    The sequence synchronization worker validates that sequence definitions
+    match between publisher and subscriber. If mismatches exist, the worker
+    logs an error identifying them and exits. The apply worker continues
+    respawning the sequence synchronization worker until synchronization
+    succeeds. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Because incremental sequence changes are not replicated, the sequence
+    values on the subscriber become stale as the publisher advances them.
+   </para>
+   <para>
+    To check whether resynchronization is needed, compare the sequence values
+    on the publisher and the subscriber, and then run
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link> to
+    resynchronize them.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+/* pub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* pub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+/* pub # */ CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+/* sub # */ CREATE SUBSCRIPTION sub1
+/* sub - */ CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+/* sub - */ PUBLICATION pub1;
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all sequences known to the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2280,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2616,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, and
+    table/sequence synchronization workers.
    </para>
 
    <para>
@@ -2437,8 +2630,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index f3bf527d5b4..e3523ac882d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..d2ca3165f8a 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter has no effect for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter has no effect for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter has no effect
+          for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter has no effect for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

#464Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#463)
Re: Logical Replication of sequences

On Mon, Nov 3, 2025 at 8:46 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20251103 patch has the changes for the same.

I have pushed the 0001 after making minor adjustments in tests and at
a few other places. Kindly rebase and send the remaining patches.

--
With Regards,
Amit Kapila.

#465shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#464)
Re: Logical Replication of sequences

Please find a few comments on 003 patch (seqsync_error_count)

1)
+ /*
+ * Report the worker failed during either sequence synchronization or
+ * table synchronization or apply.
+ */

Shall we tweak it slightly to:
Report the worker failed during sequence synchronization, table
synchronization, or apply.

2)

+ SELECT count(1) = 1 FROM pg_stat_subscription_stats
+ WHERE subname = '$sub_name' and seq_sync_error_count > 0 and
sync_error_count > 0
  ])
    or die
    qq(Timed out while waiting for tablesync errors for subscription
'$sub_name');

Since we are checking both table-sync and seq-sync errors here, we
shall update the failure-message.

3)
+ # Change the sequence start value on the subscriber so that it
doesn't error out.
+ $node_subscriber->safe_psql($db,
+ qq(ALTER SEQUENCE $sequence_name INCREMENT 1));

Please mention in comment 'Change the sequence start value to default....'.
Otherwise it is not clear why changing to 1 is helping here as the
previous creation of seq on pub did not mention any 'INCREMENT' value
at all.

4)

- # Wait for initial tablesync to finish.
+ # Wait for initial sync to finish.
  $node_subscriber->poll_query_until(
  $db,
  qq[
- SELECT count(1) = 1 FROM pg_subscription_rel
- WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+ SELECT count(1) = 2 FROM pg_subscription_rel
+ WHERE srrelid IN ('$table_name'::regclass,
'$sequence_name'::regclass) AND srsubstate in ('r', 's')
  ])
    or die
    qq(Timed out while waiting for subscriber to synchronize data for
table '$table_name'.);

a) Will it be better to separate the 2 queries as the table-sync can
be 'r' and 's', while seq-sync has to be 'r'.

b) If we plan to keep the same as above, the failure-message needs to
be changed as it mentions only table.

thanks
Shveta

#466vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#464)
Re: Logical Replication of sequences

On Wed, 5 Nov 2025 at 13:58, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 3, 2025 at 8:46 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20251103 patch has the changes for the same.

I have pushed the 0001 after making minor adjustments in tests and at
a few other places. Kindly rebase and send the remaining patches.

I noticed a buildfarm failure on prion at [1]https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-11-05%2010%3A30%3A15.
The test failed on prion because it runs with the following additional
configuration:
log_error_verbosity = verbose

Due to this setting, the logs include an extra LOCATION line between
the WARNING and ERROR messages, which was not expected by the test:
2025-11-05 11:35:21.090 UTC [1357163:3] WARNING: 55000: mismatched or
renamed sequence on subscriber ("public.regress_s4")
2025-11-05 11:35:21.090 UTC [1357163:4] LOCATION:
report_sequence_errors, sequencesync.c:185
2025-11-05 11:35:21.090 UTC [1357163:5] ERROR: 55000: logical
replication sequence synchronization failed for subscription
"regress_seq_sub"

I'm working on a fix for this issue.

[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-11-05%2010%3A30%3A15

Regards,
Vignesh

#467Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#466)
Re: Logical Replication of sequences

On Wed, Nov 5, 2025 at 5:57 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Nov 2025 at 13:58, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 3, 2025 at 8:46 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20251103 patch has the changes for the same.

I have pushed the 0001 after making minor adjustments in tests and at
a few other places. Kindly rebase and send the remaining patches.

I noticed a buildfarm failure on prion at [1].
The test failed on prion because it runs with the following additional
configuration:
log_error_verbosity = verbose

Due to this setting, the logs include an extra LOCATION line between
the WARNING and ERROR messages, which was not expected by the test:
2025-11-05 11:35:21.090 UTC [1357163:3] WARNING: 55000: mismatched or
renamed sequence on subscriber ("public.regress_s4")
2025-11-05 11:35:21.090 UTC [1357163:4] LOCATION:
report_sequence_errors, sequencesync.c:185
2025-11-05 11:35:21.090 UTC [1357163:5] ERROR: 55000: logical
replication sequence synchronization failed for subscription
"regress_seq_sub"

I'm working on a fix for this issue.

We can fix it either by expecting just a WARNING for this test which
is sufficient. The other possibility is that we can expect some other
line(s) between WARNING and ERROR. I think just waiting for WARNING in
the log is sufficient as that serves the purpose of this test. What do
you think?
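
As a rough illustration of the second possibility (a sketch only, not from any posted patch), the existing check in 036_sequences.pl could be relaxed to tolerate intermediate lines, such as the LOCATION: line emitted under log_error_verbosity = verbose, between the WARNING and the ERROR:

# (?:.*\n)*? lazily skips any whole lines (e.g. a LOCATION: line) that may
# appear between the WARNING and the ERROR when log_error_verbosity = verbose.
$node_subscriber->wait_for_log(
	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)\n(?:.*\n)*?.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
	$log_offset);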

--
With Regards,
Amit Kapila.

#468vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#467)
1 attachment(s)
Re: Logical Replication of sequences

On Wed, 5 Nov 2025 at 18:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Nov 5, 2025 at 5:57 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, 5 Nov 2025 at 13:58, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 3, 2025 at 8:46 PM vignesh C <vignesh21@gmail.com> wrote:

The attached v20251103 patch has the changes for the same.

I have pushed the 0001 after making minor adjustments in tests and at
a few other places. Kindly rebase and send the remaining patches.

I noticed a buildfarm failure on prion at [1].
The test failed on prion because it runs with the following additional
configuration:
log_error_verbosity = verbose

Due to this setting, the logs include an extra LOCATION line between
the WARNING and ERROR messages, which was not expected by the test:
2025-11-05 11:35:21.090 UTC [1357163:3] WARNING: 55000: mismatched or
renamed sequence on subscriber ("public.regress_s4")
2025-11-05 11:35:21.090 UTC [1357163:4] LOCATION:
report_sequence_errors, sequencesync.c:185
2025-11-05 11:35:21.090 UTC [1357163:5] ERROR: 55000: logical
replication sequence synchronization failed for subscription
"regress_seq_sub"

I'm working on a fix for this issue.

We can fix it either by expecting just a WARNING for this test which
is sufficient. The other possibility is that we can expect some other
line(s) between WARNING and ERROR. I think just waiting for WARNING in
the log is sufficient as that serves the purpose of this test. What do
you think?

I also think checking only for the WARNING message in the log is
sufficient to verify the test. The attached patch includes this
change.
Alternatively, we could check for the WARNING first and then verify
the ERROR separately if needed.
Thoughts?
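
If we ever want that, a rough sketch (illustrative only, not part of the attached patch) could match the WARNING and the ERROR as two separate checks against the same log offset, so any extra lines between them would not matter:

# Check the WARNING and the ERROR as two separate matches starting from the
# same log offset, so intermediate lines (e.g. LOCATION:) are irrelevant.
$node_subscriber->wait_for_log(
	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)/,
	$log_offset);
$node_subscriber->wait_for_log(
	qr/ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
	$log_offset);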

Regards,
Vignesh

Attachments:

0001-Fix-tap-test-failure-to-handle-verbose-log-output.patch (text/x-patch; charset=US-ASCII)
From 5af233ef814a2f834c1208431ae3023bc53e5abd Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Wed, 5 Nov 2025 18:51:24 +0530
Subject: [PATCH] Fix tap test failure to handle verbose log output

The test failed on buildfarm member prion because it runs with
log_error_verbosity = verbose, which adds an extra LOCATION: line
between the WARNING and ERROR messages. Update the test to only
check for the WARNING message to avoid verbosity dependent failures.
---
 src/test/subscription/t/036_sequences.pl | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index d34b0e4ae2f..8112f86d2dd 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -201,14 +201,14 @@ $node_subscriber->safe_psql('postgres',
 # Verify that an error is logged for parameter differences on sequence
 # ('regress_s4').
 $node_subscriber->wait_for_log(
-	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)/,
 	$log_offset);
 
 # Verify that an error is logged for the missing sequence ('regress_s4').
 $node_publisher->safe_psql('postgres', qq(DROP SEQUENCE regress_s4;));
 
 $node_subscriber->wait_for_log(
-	qr/WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	qr/WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \("public.regress_s4"\)/,
 	$log_offset);
 
 done_testing();
-- 
2.43.0

#469vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#465)
2 attachment(s)
Re: Logical Replication of sequences

On Wed, 5 Nov 2025 at 15:11, shveta malik <shveta.malik@gmail.com> wrote:

Please find a few comments on 003 patch (seqsync_error_count)

1)
+ /*
+ * Report the worker failed during either sequence synchronization or
+ * table synchronization or apply.
+ */

Shall we tweak it slightly to:
Report the worker failed during sequence synchronization, table
synchronization, or apply.

Modified

2)

+ SELECT count(1) = 1 FROM pg_stat_subscription_stats
+ WHERE subname = '$sub_name' and seq_sync_error_count > 0 and
sync_error_count > 0
])
or die
qq(Timed out while waiting for tablesync errors for subscription
'$sub_name');

Since we are checking both table-sync and seq-sync errors here, we
shall update the failure-message.

Modified

3)
+ # Change the sequence start value on the subscriber so that it
doesn't error out.
+ $node_subscriber->safe_psql($db,
+ qq(ALTER SEQUENCE $sequence_name INCREMENT 1));

Please mention in comment 'Change the sequence start value to default....'.
Otherwise it is not clear why changing to 1 is helping here as the
previous creation of seq on pub did not mention any 'INCREMENT' value
at all.

Modified

4)

- # Wait for initial tablesync to finish.
+ # Wait for initial sync to finish.
$node_subscriber->poll_query_until(
$db,
qq[
- SELECT count(1) = 1 FROM pg_subscription_rel
- WHERE srrelid = '$table_name'::regclass AND srsubstate in ('r', 's')
+ SELECT count(1) = 2 FROM pg_subscription_rel
+ WHERE srrelid IN ('$table_name'::regclass,
'$sequence_name'::regclass) AND srsubstate in ('r', 's')
])
or die
qq(Timed out while waiting for subscriber to synchronize data for
table '$table_name'.);

a) Would it be better to separate the 2 queries, as the table-sync
state can be 'r' or 's', while seq-sync has to be 'r'?

b) If we plan to keep it the same as above, the failure-message needs
to be changed as it mentions only the table.

I have separated the query to check individually for table sync and
sequence sync.

The attached v20251105 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20251105-0002-Documentation-for-sequence-synchronization.patch (text/x-patch)
From e8a1f0e18155126738fbdd1963c2680d991e08a7 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251105 2/2] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 224 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  27 ++-
 7 files changed, 279 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 06d1e4403b5..f5cbc68b938 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, or table/sequence synchronization
+        worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, and  table/sequence
+        synchronization workers.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5359,9 +5359,11 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
         during the subscription initialization or when new tables are added.
+        One additional worker is also needed for sequence synchronization.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates last sequence value set in sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index b01f5e998b2..9e78c2f0465 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences can be
+   synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,194 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences currently known to the subscription.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    The sequence synchronization worker validates that sequence definitions
+    match between publisher and subscriber. If mismatches exist, the worker
+    logs an error identifying them and exits. The apply worker continues
+    respawning the sequence synchronization worker until synchronization
+    succeeds. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    Subscriber sequence values drift out of sync as the publisher advances
+    them. Compare values between publisher and subscriber, then run
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link> to
+    resynchronize if necessary.
+   </para>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+/* pub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* pub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+/* pub # */ CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+/* sub # */ CREATE SUBSCRIPTION sub1
+/* sub - */ CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+/* sub - */ PUBLICATION pub1;
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all sequences known to the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2280,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2616,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, and
+    table/sequence synchronization workers.
    </para>
 
    <para>
@@ -2437,8 +2630,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 02e4400b4a8..1dc0024ab92 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..442d370b426 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -127,10 +127,10 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
 
          <para>
           Since no connection is made when this option is
-          <literal>false</literal>, no tables are subscribed. To initiate
-          replication, you must manually create the replication slot, enable
-          the failover if required, enable the subscription, and refresh the
-          subscription. See
+          <literal>false</literal>, no tables and sequences are subscribed. To
+          initiate replication, you must manually create the replication slot,
+          enable the failover if required, enable the subscription, and refresh
+          the subscription. See
           <xref linkend="logical-replication-subscription-examples-deferred-slot"/>
           for examples.
          </para>
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter has no effect for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter has no effect for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter has no effect
+          for sequences.
          </para>
 
          <para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter has no effect for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

v20251105-0001-Add-seq_sync_error_count-to-subscription-s.patch (text/x-patch)
From 74f67efcd95df7421c80a3fc160deb396152c783 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 3 Nov 2025 19:50:16 +0530
Subject: [PATCH v20251105 1/2] Add seq_sync_error_count to subscription
 statistics.

This commit introduces a new column seq_sync_error_count to subscription
statistics. The new field tracks the number of errors encountered during
sequence synchronization for each subscription.
---
 doc/src/sgml/monitoring.sgml                  |  9 ++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 18 ++--
 .../utils/activity/pgstat_subscription.c      | 27 ++++--
 src/backend/utils/adt/pgstatfuncs.c           | 27 +++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 89 +++++++++++++------
 11 files changed, 133 insertions(+), 59 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index f3bf527d5b4..02e4400b4a8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2192,6 +2192,15 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>sequence_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dec8df4f8ee..059e8778ca7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1415,6 +1415,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 717c82328f2..161ff1177d9 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -725,6 +725,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e5a2856fd17..dcc6124cc73 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index e1c757c911e..56658273ffc 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5606,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 MyLogicalRepWorker->type);
 
 			PG_RE_THROW();
 		}
@@ -5953,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		 * Report the worker failed during either table synchronization or
-		 * apply.
-		 */
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										 !am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a710508979e..1521d6e2ab4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2203,7 +2203,7 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
@@ -2221,25 +2221,27 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 					   OIDOID, -1, 0);
 	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
 	BlessTupleDesc(tupdesc);
 
@@ -2256,6 +2258,9 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34b7fddb0e7..5cf9e12fcb9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 7ae503e71a2..a0610bb3e31 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -769,7 +772,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2bf968ae3d3..7c52181cbcb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..38a6ec542cf 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,7 +21,8 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
 	# Initial table setup on both publisher and subscriber. On subscriber we
 	# create the same tables but with primary keys. Also, insert some data that
 	# will conflict with the data replicated from publisher later.
@@ -32,6 +33,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,40 +42,62 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' AND seq_sync_error_count > 0 AND sync_error_count > 0
+	])
+	  or die
+	  qq(Timed out while waiting for sequencesync errors and tablesync errors for subscription '$sub_name');
+
+	# Change the sequence start value back to the default on the subscriber so
+	# it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
+	# Wait for sequencesync to finish.
+	$node_subscriber->poll_query_until(
+		$db,
+		qq[
+	SELECT count(1) = 1 FROM pg_subscription_rel
+	WHERE srrelid = '$sequence_name'::regclass AND srsubstate = 'r'
 	])
 	  or die
-	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
+	  qq(Timed out while waiting for subscriber to synchronize data for sequence '$sequence_name'.);
 
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
 
-	# Wait for initial tablesync to finish.
+	# Wait for initial tablesync sync to finish.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
@@ -136,14 +160,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +178,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +187,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +200,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +227,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +245,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +266,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +281,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0

#470Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#469)
Re: Logical Replication of sequences

Hi Vignesh,

Some review comments for patch v20251105-0001 (stats counter)

======
General.

1.
It was not obvious to me what this counter is actually counting. Both the
commit message and the docs say it counts "errors" during sequence
synchronization. AFAIK the sequencesync worker processes sequences in
"batches" of ~100, so in reality even if there are a dozen different
sequences with problems, the worker combines them all and reports just
one actual ERROR, doesn't it? So this counter appears to be just an
indication that "something bad happened" during the synchronization.
Even a high count value doesn't mean there are lots of sequences with
errors; there might just be 1 sequence error that has remained
unaddressed and so keeps incrementing the count every time the
sequencesync reruns and re-fails, right?

Maybe the descriptions can clarify what the counter really means?

======
Commit message

2.
See general comment #1

======
doc/src/sgml/monitoring.sgml

sequence_sync_error_count:

3.
+      <para>
+       Number of times an error occurred during the sequence synchronization
+      </para></entry>

See general comment #1

~~~

sync_error_count:

4.
Now there are 2 kinds of synchronization workers -- tablesync and sequencesync.

But there was already a stats counter called "sync_error_count". Are
you worried that this name now seems ambiguous? (e.g. versus if it was
called "table_sync_error_count"). How can we prevent users from being
misled by this old generic looking name?

======
src/backend/catalog/system_views.sql

5.
ss.apply_error_count,
+ ss.seq_sync_error_count,
ss.sync_error_count,

The documentation said the new column was called "sequence_sync_error_count".

======
src/backend/utils/adt/pgstatfuncs.c

pg_stat_get_subscription_stats:

6.
- TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 3, "seq_sync_error_count",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 4, "sync_error_count",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_insert_exists",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_origin_differs",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_exists",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_deleted",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_update_missing",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_origin_differs",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 11,
"confl_multiple_unique_conflicts",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_delete_missing",
     INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 12,
"confl_multiple_unique_conflicts",
+    INT8OID, -1, 0);
+ TupleDescInitEntry(tupdesc, (AttrNumber) 13, "stats_reset",

6a.
IMO now is a good opportunity to make more use of that 'i' variable,
instead of hardwiring all the column indexes 1,2,3...13. That way
would avoid all this code churn every time a new stats column gets
added in the future.
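
For example, a rough sketch of that idea (illustration only; the
counter name here is made up, and the remaining columns are elided):

	AttrNumber	colno = 0;

	TupleDescInitEntry(tupdesc, ++colno, "subid", OIDOID, -1, 0);
	TupleDescInitEntry(tupdesc, ++colno, "apply_error_count", INT8OID, -1, 0);
	TupleDescInitEntry(tupdesc, ++colno, "seq_sync_error_count", INT8OID, -1, 0);
	TupleDescInitEntry(tupdesc, ++colno, "sync_error_count", INT8OID, -1, 0);
	/* ... confl_* columns and stats_reset follow the same pattern ... */

That way, adding a new stats column in the future only needs a new
line in the right place instead of renumbering every later entry.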

~

6b.
The documentation said the new column was called "sequence_sync_error_count".

======
src/test/subscription/t/026_stats.pl

7.
 sub create_sub_pub_w_errors
 {
- my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
+ my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+   = @_;
  # Initial table setup on both publisher and subscriber. On subscriber we
  # create the same tables but with primary keys. Also, insert some data that
  # will conflict with the data replicated from publisher later.

~

I think that comment should also mention that you've made a subscriber
SEQUENCE with a different INCREMENT to deliberately clash with the
publisher sequence of the same name.

Also typos:
/On subscriber/On the subscriber/
/from publisher/from the publisher/

~~~

8.
+ # Change the sequence start value back to the default on the subscriber so
+ # it doesn't error out.
+ $node_subscriber->safe_psql($db,
+ qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+

That comment is not quite right. e.g. you are changing the INCREMENT
to match, not the "start value".

~~~

9.
- # Wait for initial tablesync to finish.
+ # Wait for initial tablesync sync to finish.

"tablesync sync"?

Was this change needed at all? If any change is needed at all (I am
not saying it is) I thought it should say something more like: "Wait
for the initial tablesync to finish successfully."

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#471Shinya Kato
shinya11.kato@gmail.com
In reply to: Amit Kapila (#464)
1 attachment(s)
Re: Logical Replication of sequences

On Wed, Nov 5, 2025 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have pushed the 0001 after making minor adjustments in tests and at
a few other places. Kindly rebase and send the remaining patches.

I discovered that the sequence sync worker fails for sequences
containing single quotes.
---
2025-11-06 10:22:50.335 JST [1096008] ERROR: could not fetch sequence
information from the publisher: ERROR: syntax error at or near
"quote"
LINE 4: FROM ( VALUES ('public', 'regress'quote', 0), ('public', 'ho...
^
2025-11-06 10:22:50.335 JST [1088168] LOG: background worker "logical
replication sequencesync worker" (PID 1096008) exited with exit code 1
---

I haven't read all the threads, so I might be mistaken, but I've
created a patch.

--
Best regards,
Shinya Kato
NTT OSS Center

Attachments:

fix-logical-sequence-sync-quoted-names.diff (application/octet-stream)
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 717c82328f2..c7cb236f2cb 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -60,6 +60,7 @@
 #include "replication/logicalworker.h"
 #include "replication/worker_internal.h"
 #include "utils/acl.h"
+#include "utils/builtins.h"
 #include "utils/fmgroids.h"
 #include "utils/guc.h"
 #include "utils/inval.h"
@@ -147,13 +148,18 @@ get_sequences_string(List *seqindexes, StringInfo buf)
 	resetStringInfo(buf);
 	foreach_int(seqidx, seqindexes)
 	{
+		char *qualified_name;
+
 		LogicalRepSequenceInfo *seqinfo =
 			(LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);
 
 		if (buf->len > 0)
 			appendStringInfoString(buf, ", ");
 
-		appendStringInfo(buf, "\"%s.%s\"", seqinfo->nspname, seqinfo->seqname);
+		qualified_name = quote_qualified_identifier(seqinfo->nspname,
+													seqinfo->seqname);
+		appendStringInfoString(buf, qualified_name);
+		pfree(qualified_name);
 	}
 }
 
@@ -407,14 +413,23 @@ copy_sequences(WalReceiverConn *conn)
 
 		for (int idx = cur_batch_base_index; idx < n_seqinfos; idx++)
 		{
+			char *nsp_literal;
+			char *seq_literal;
+
 			LogicalRepSequenceInfo *seqinfo =
 				(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
 
 			if (seqstr->len > 0)
 				appendStringInfoString(seqstr, ", ");
 
-			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
-							 seqinfo->nspname, seqinfo->seqname, idx);
+			nsp_literal = quote_literal_cstr(seqinfo->nspname);
+			seq_literal = quote_literal_cstr(seqinfo->seqname);
+
+			appendStringInfo(seqstr, "(%s, %s, %d)",
+							 nsp_literal, seq_literal, idx);
+
+			pfree(nsp_literal);
+			pfree(seq_literal);
 
 			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
 				break;
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index d34b0e4ae2f..50d8f9d6ba5 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -32,6 +32,7 @@ $ddl = qq(
 	CREATE SEQUENCE regress_s1;
 	CREATE SEQUENCE regress_s2;
 	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE "regress'quote";
 );
 $node_subscriber->safe_psql('postgres', $ddl);
 
@@ -201,14 +202,39 @@ $node_subscriber->safe_psql('postgres',
 # Verify that an error is logged for parameter differences on sequence
 # ('regress_s4').
 $node_subscriber->wait_for_log(
-	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \(.*regress_s4.*\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
 	$log_offset);
 
 # Verify that an error is logged for the missing sequence ('regress_s4').
 $node_publisher->safe_psql('postgres', qq(DROP SEQUENCE regress_s4;));
 
 $node_subscriber->wait_for_log(
-	qr/WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	qr/WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \(.*regress_s4.*\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
 	$log_offset);
 
+##########
+# Sequences with quotes in the name should be synchronized correctly.
+##########
+
+$node_publisher->safe_psql(
+	'postgres', qq(
+	CREATE SEQUENCE "regress'quote";
+	INSERT INTO regress_seq_test
+	SELECT nextval('"regress''quote"') FROM generate_series(1,100);
+));
+
+$node_subscriber->safe_psql(
+	'postgres', qq(
+	ALTER SUBSCRIPTION regress_seq_sub REFRESH PUBLICATION;
+));
+$node_subscriber->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for subscriber to synchronize data";
+
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, is_called FROM "regress'quote";
+));
+is($result, '100|t',
+	'REFRESH PUBLICATION will sync sequences with quoted names');
+
 done_testing();
#472Amit Kapila
amit.kapila16@gmail.com
In reply to: Shinya Kato (#471)
Re: Logical Replication of sequences

On Thu, Nov 6, 2025 at 8:00 AM Shinya Kato <shinya11.kato@gmail.com> wrote:

I discovered that the sequence sync worker fails for sequences
containing single quotes.
---
2025-11-06 10:22:50.335 JST [1096008] ERROR: could not fetch sequence
information from the publisher: ERROR: syntax error at or near
"quote"
LINE 4: FROM ( VALUES ('public', 'regress'quote', 0), ('public', 'ho...
^
2025-11-06 10:22:50.335 JST [1088168] LOG: background worker "logical
replication sequencesync worker" (PID 1096008) exited with exit code 1
---

I haven't read all the threads, so I might be mistaken, but I've
created a patch.

Thanks for spotting the issue and providing a fix. Here are a few comments:

1.
+ nsp_literal = quote_literal_cstr(seqinfo->nspname);
+ seq_literal = quote_literal_cstr(seqinfo->seqname);
+
+ appendStringInfo(seqstr, "(%s, %s, %d)",
+ nsp_literal, seq_literal, idx);
+
+ pfree(nsp_literal);
+ pfree(seq_literal);

We don't need this retail pfree, as the current memory context at this
point will be TopTransactionContext, which will anyway be freed after a
batch of sequences.

2.
@@ -147,13 +148,18 @@ get_sequences_string(List *seqindexes, StringInfo buf)
resetStringInfo(buf);
foreach_int(seqidx, seqindexes)
{
+ char *qualified_name;
+
LogicalRepSequenceInfo *seqinfo =
(LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);

if (buf->len > 0)
appendStringInfoString(buf, ", ");

- appendStringInfo(buf, "\"%s.%s\"", seqinfo->nspname, seqinfo->seqname);
+ qualified_name = quote_qualified_identifier(seqinfo->nspname,
+ seqinfo->seqname);
+ appendStringInfoString(buf, qualified_name);

The function get_sequences_string() is used in the WARNING code path, and
normally we don't quote names in messages. For example, see the following
cases:

parserOpenTable:
ereport(ERROR,
    (errcode(ERRCODE_UNDEFINED_TABLE),
    errmsg("relation \"%s.%s\" does not exist",
        relation->schemaname, relation->relname)));

RangeVarGetRelidExtended:
        ereport(elevel,
            (errcode(ERRCODE_LOCK_NOT_AVAILABLE),
             errmsg("could not obtain lock on relation \"%s.%s\"",
                relation->schemaname, relation->relname)));

3. Also, see, if the newly added test can be combined with other existing tests.

--
With Regards,
Amit Kapila.

#473vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#472)
1 attachment(s)
Re: Logical Replication of sequences

On Thu, 6 Nov 2025 at 10:10, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Nov 6, 2025 at 8:00 AM Shinya Kato <shinya11.kato@gmail.com> wrote:

I discovered that the sequence sync worker fails for sequences
containing single quotes.
---
2025-11-06 10:22:50.335 JST [1096008] ERROR: could not fetch sequence
information from the publisher: ERROR: syntax error at or near
"quote"
LINE 4: FROM ( VALUES ('public', 'regress'quote', 0), ('public', 'ho...
^
2025-11-06 10:22:50.335 JST [1088168] LOG: background worker "logical
replication sequencesync worker" (PID 1096008) exited with exit code 1
---

I haven't read all the threads, so I might be mistaken, but I've
created a patch.

Thanks Kato-san for finding this issue and sharing the changes.

1.
+ nsp_literal = quote_literal_cstr(seqinfo->nspname);
+ seq_literal = quote_literal_cstr(seqinfo->seqname);
+
+ appendStringInfo(seqstr, "(%s, %s, %d)",
+ nsp_literal, seq_literal, idx);
+
+ pfree(nsp_literal);
+ pfree(seq_literal);

We don't need this retail pfree, as the current memory context at this
point will be TopTransactionContext, which will anyway be freed after a
batch of sequences.

Modified

2.
@@ -147,13 +148,18 @@ get_sequences_string(List *seqindexes, StringInfo buf)
resetStringInfo(buf);
foreach_int(seqidx, seqindexes)
{
+ char *qualified_name;
+
LogicalRepSequenceInfo *seqinfo =
(LogicalRepSequenceInfo *) list_nth(seqinfos, seqidx);

if (buf->len > 0)
appendStringInfoString(buf, ", ");

- appendStringInfo(buf, "\"%s.%s\"", seqinfo->nspname, seqinfo->seqname);
+ qualified_name = quote_qualified_identifier(seqinfo->nspname,
+ seqinfo->seqname);
+ appendStringInfoString(buf, qualified_name);

The function get_sequences_string() is used in the WARNING code path, and
normally we don't quote names in messages. For example, see the following
cases:

Agree on this.

3. Also, see, if the newly added test can be combined with other existing tests.

Modified

The patch also includes the change for buildfarm failure at [1].
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-11-05%2010%3A30%3A15

The attached v2 version patch has the changes for the same.
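
For reference, a minimal psql-level illustration of the escaping this
relies on (the sequence name comes from the report; the query shape is
only a simplified stand-in for the one the sequencesync worker builds):

-- An unescaped embedded quote breaks the generated VALUES list:
--   ... FROM (VALUES ('public', 'regress'quote', 0)) ...   -- syntax error
-- quote_literal_cstr() doubles the embedded quote, so the row parses:
SELECT *
FROM (VALUES ('public', 'regress''quote', 0)) AS s(nspname, seqname, idx);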

Regards,
Vignesh

Attachments:

v2-0001-Fix-tap-test-failure-with-verbose-log-output-and-.patch
From 4eed90c39041f34f4d0cb4db40d876faa4b1d1b0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 6 Nov 2025 10:24:31 +0530
Subject: [PATCH v2] Fix tap test failure with verbose log output and quoting
 issue in sequence queries

The tap test failed on buildfarm member prion because it runs with
log_error_verbosity = verbose, which adds an extra LOCATION: line between
the WARNING and ERROR messages. Update the test to check only for the
WARNING message to make it independent of verbosity settings.

Also fix an issue where sequences with quotes in their names failed during
remote information fetch, as quote_literal_cstr() was not used when
querying sequence information.
---
 src/backend/replication/logical/sequencesync.c | 11 +++++++++--
 src/test/subscription/t/036_sequences.pl       | 14 ++++++++++++--
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index 717c82328f2..a8a39bec508 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -60,6 +60,7 @@
 #include "replication/logicalworker.h"
 #include "replication/worker_internal.h"
 #include "utils/acl.h"
+#include "utils/builtins.h"
 #include "utils/fmgroids.h"
 #include "utils/guc.h"
 #include "utils/inval.h"
@@ -407,14 +408,20 @@ copy_sequences(WalReceiverConn *conn)
 
 		for (int idx = cur_batch_base_index; idx < n_seqinfos; idx++)
 		{
+			char	   *nspname_literal;
+			char	   *seqname_literal;
+
 			LogicalRepSequenceInfo *seqinfo =
 				(LogicalRepSequenceInfo *) list_nth(seqinfos, idx);
 
 			if (seqstr->len > 0)
 				appendStringInfoString(seqstr, ", ");
 
-			appendStringInfo(seqstr, "(\'%s\', \'%s\', %d)",
-							 seqinfo->nspname, seqinfo->seqname, idx);
+			nspname_literal = quote_literal_cstr(seqinfo->nspname);
+			seqname_literal = quote_literal_cstr(seqinfo->seqname);
+
+			appendStringInfo(seqstr, "(%s, %s, %d)",
+							 nspname_literal, seqname_literal, idx);
 
 			if (++batch_size == MAX_SEQUENCES_SYNC_PER_BATCH)
 				break;
diff --git a/src/test/subscription/t/036_sequences.pl b/src/test/subscription/t/036_sequences.pl
index d34b0e4ae2f..9d6c9e8b200 100644
--- a/src/test/subscription/t/036_sequences.pl
+++ b/src/test/subscription/t/036_sequences.pl
@@ -22,6 +22,7 @@ $node_subscriber->start;
 my $ddl = qq(
 	CREATE TABLE regress_seq_test (v BIGINT);
 	CREATE SEQUENCE regress_s1;
+	CREATE SEQUENCE "regress'quote";
 );
 $node_publisher->safe_psql('postgres', $ddl);
 
@@ -32,6 +33,7 @@ $ddl = qq(
 	CREATE SEQUENCE regress_s1;
 	CREATE SEQUENCE regress_s2;
 	CREATE SEQUENCE regress_s3;
+	CREATE SEQUENCE "regress'quote";
 );
 $node_subscriber->safe_psql('postgres', $ddl);
 
@@ -40,6 +42,7 @@ $node_publisher->safe_psql(
 	'postgres', qq(
 	-- generate a number of values using the sequence
 	INSERT INTO regress_seq_test SELECT nextval('regress_s1') FROM generate_series(1,100);
+	INSERT INTO regress_seq_test SELECT nextval('"regress''quote"') FROM generate_series(1,100);
 ));
 
 # Setup logical replication pub/sub
@@ -63,6 +66,13 @@ my $result = $node_subscriber->safe_psql(
 ));
 is($result, '100|t', 'initial test data replicated');
 
+$result = $node_subscriber->safe_psql(
+	'postgres', qq(
+	SELECT last_value, is_called FROM "regress'quote";
+));
+is($result, '100|t',
+	'initial test data replicated for sequence name having quotes');
+
 ##########
 ## ALTER SUBSCRIPTION ... REFRESH PUBLICATION should cause sync of new
 # sequences of the publisher, but changes to existing sequences should
@@ -201,14 +211,14 @@ $node_subscriber->safe_psql('postgres',
 # Verify that an error is logged for parameter differences on sequence
 # ('regress_s4').
 $node_subscriber->wait_for_log(
-	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	qr/WARNING: ( [A-Z0-9]+:)? mismatched or renamed sequence on subscriber \("public.regress_s4"\)/,
 	$log_offset);
 
 # Verify that an error is logged for the missing sequence ('regress_s4').
 $node_publisher->safe_psql('postgres', qq(DROP SEQUENCE regress_s4;));
 
 $node_subscriber->wait_for_log(
-	qr/WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \("public.regress_s4"\)\n.*ERROR: ( [A-Z0-9]+:)? logical replication sequence synchronization failed for subscription "regress_seq_sub"/,
+	qr/WARNING: ( [A-Z0-9]+:)? missing sequence on publisher \("public.regress_s4"\)/,
 	$log_offset);
 
 done_testing();
-- 
2.43.0

#474Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#473)
Re: Logical Replication of sequences

On Thu, Nov 6, 2025 at 10:48 AM vignesh C <vignesh21@gmail.com> wrote:

The patch also includes the change for buildfarm failure at [1].
[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-11-05%2010%3A30%3A15

The attached v2 version patch has the changes for the same.

Thanks for the patch, Pushed.

--
With Regards,
Amit Kapila.

#475vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#474)
2 attachment(s)
Re: Logical Replication of sequences

On Thu, 6 Nov 2025 at 16:07, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Nov 6, 2025 at 10:48 AM vignesh C <vignesh21@gmail.com> wrote:

The patch also includes the change for buildfarm failure at [1].
[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2025-11-05%2010%3A30%3A15

The attached v2 version patch has the changes for the same.

Thanks for the patch, Pushed.

Thanks for pushing the patch, here is a rebased version of the
remaining patches.

This patch also addresses Peter's comments at [1], except the 4th
comment (renaming an existing column), for which I will post a patch
separately.
[1]: /messages/by-id/CAHut+PtoLN0bRu7bNiSeF04dQQecoW-EXKMBX=Hy0uqCvQa8MA@mail.gmail.com
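
As a quick sanity check (a hypothetical example, assuming the attached
v20251107-0001 patch is applied), the new counter appears alongside the
existing error counters in the stats view:

-- seq_sync_error_count is the column added by the 0001 patch
SELECT subname, apply_error_count, seq_sync_error_count, sync_error_count
FROM pg_stat_subscription_stats;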

Regards,
Vignesh

Attachments:

v20251107-0001-Add-seq_sync_error_count-to-subscription-s.patch
From f2d8c32e84c35302ce2153d7baf19b2c4a4fc51f Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 3 Nov 2025 19:50:16 +0530
Subject: [PATCH v20251107] Add seq_sync_error_count to subscription
 statistics.

This commit adds a new column, seq_sync_error_count, to the subscription
statistics. This counter tracks the number of errors reported by the
sequence synchronization worker. Since a single worker handles the
synchronization of all sequences, this value may reflect errors from
multiple sequences.
---
 doc/src/sgml/monitoring.sgml                  | 11 +++
 src/backend/catalog/system_views.sql          |  1 +
 .../replication/logical/sequencesync.c        |  3 +
 src/backend/replication/logical/tablesync.c   |  3 +-
 src/backend/replication/logical/worker.c      | 18 ++--
 .../utils/activity/pgstat_subscription.c      | 27 ++++-
 src/backend/utils/adt/pgstatfuncs.c           | 39 +++++---
 src/include/catalog/pg_proc.dat               |  6 +-
 src/include/pgstat.h                          |  6 +-
 src/test/regress/expected/rules.out           |  3 +-
 src/test/subscription/t/026_stats.pl          | 98 +++++++++++++------
 11 files changed, 151 insertions(+), 64 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index fc64df43e3f..2741c138593 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2192,6 +2192,17 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>seq_sync_error_count</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of times an error occurred in the sequence synchronization
+       worker. A single worker synchronizes all sequences, so one error
+       increment may represent failures across multiple sequences.
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sync_error_count</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dec8df4f8ee..059e8778ca7 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1415,6 +1415,7 @@ CREATE VIEW pg_stat_subscription_stats AS
         ss.subid,
         s.subname,
         ss.apply_error_count,
+        ss.seq_sync_error_count,
         ss.sync_error_count,
         ss.confl_insert_exists,
         ss.confl_update_origin_differs,
diff --git a/src/backend/replication/logical/sequencesync.c b/src/backend/replication/logical/sequencesync.c
index a8a39bec508..e093e65e540 100644
--- a/src/backend/replication/logical/sequencesync.c
+++ b/src/backend/replication/logical/sequencesync.c
@@ -732,6 +732,9 @@ start_sequence_sync()
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_SEQUENCESYNC);
+
 			PG_RE_THROW();
 		}
 	}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e5a2856fd17..dcc6124cc73 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1530,7 +1530,8 @@ start_table_sync(XLogRecPtr *origin_startpos, char **slotname)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, false);
+			pgstat_report_subscription_error(MySubscription->oid,
+											 WORKERTYPE_TABLESYNC);
 
 			PG_RE_THROW();
 		}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 28f61f96a1a..93970c6af29 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -5606,7 +5606,8 @@ start_apply(XLogRecPtr origin_startpos)
 			 * idle state.
 			 */
 			AbortOutOfAnyTransaction();
-			pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
+			pgstat_report_subscription_error(MySubscription->oid,
+											 MyLogicalRepWorker->type);
 
 			PG_RE_THROW();
 		}
@@ -5953,15 +5954,12 @@ DisableSubscriptionAndExit(void)
 
 	RESUME_INTERRUPTS();
 
-	if (am_leader_apply_worker() || am_tablesync_worker())
-	{
-		/*
-		 * Report the worker failed during either table synchronization or
-		 * apply.
-		 */
-		pgstat_report_subscription_error(MyLogicalRepWorker->subid,
-										 !am_tablesync_worker());
-	}
+	/*
+	 * Report the worker failed during sequence synchronization, table
+	 * synchronization, or apply.
+	 */
+	pgstat_report_subscription_error(MyLogicalRepWorker->subid,
+									 MyLogicalRepWorker->type);
 
 	/* Disable the subscription */
 	StartTransactionCommand();
diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c
index f9a1c831a07..35916772b9d 100644
--- a/src/backend/utils/activity/pgstat_subscription.c
+++ b/src/backend/utils/activity/pgstat_subscription.c
@@ -17,6 +17,7 @@
 
 #include "postgres.h"
 
+#include "replication/worker_internal.h"
 #include "utils/pgstat_internal.h"
 
 
@@ -24,7 +25,7 @@
  * Report a subscription error.
  */
 void
-pgstat_report_subscription_error(Oid subid, bool is_apply_error)
+pgstat_report_subscription_error(Oid subid, LogicalRepWorkerType wtype)
 {
 	PgStat_EntryRef *entry_ref;
 	PgStat_BackendSubEntry *pending;
@@ -33,10 +34,25 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)
 										  InvalidOid, subid, NULL);
 	pending = entry_ref->pending;
 
-	if (is_apply_error)
-		pending->apply_error_count++;
-	else
-		pending->sync_error_count++;
+	switch (wtype)
+	{
+		case WORKERTYPE_APPLY:
+			pending->apply_error_count++;
+			break;
+
+		case WORKERTYPE_SEQUENCESYNC:
+			pending->seq_sync_error_count++;
+			break;
+
+		case WORKERTYPE_TABLESYNC:
+			pending->sync_error_count++;
+			break;
+
+		default:
+			/* Should never happen. */
+			Assert(0);
+			break;
+	}
 }
 
 /*
@@ -115,6 +131,7 @@ pgstat_subscription_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 
 #define SUB_ACC(fld) shsubent->stats.fld += localent->fld
 	SUB_ACC(apply_error_count);
+	SUB_ACC(seq_sync_error_count);
 	SUB_ACC(sync_error_count);
 	for (int i = 0; i < CONFLICT_NUM_TYPES; i++)
 		SUB_ACC(conflict_count[i]);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a710508979e..530d5e6a427 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -2203,44 +2203,49 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
 Datum
 pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 {
-#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	12
+#define PG_STAT_GET_SUBSCRIPTION_STATS_COLS	13
 	Oid			subid = PG_GETARG_OID(0);
 	TupleDesc	tupdesc;
 	Datum		values[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
 	bool		nulls[PG_STAT_GET_SUBSCRIPTION_STATS_COLS] = {0};
 	PgStat_StatSubEntry *subentry;
 	PgStat_StatSubEntry allzero;
-	int			i = 0;
+	int			i = 1;
 
 	/* Get subscription stats */
 	subentry = pgstat_fetch_stat_subscription(subid);
 
 	/* Initialise attributes information in the tuple descriptor */
 	tupdesc = CreateTemplateTupleDesc(PG_STAT_GET_SUBSCRIPTION_STATS_COLS);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "subid",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "subid",
 					   OIDOID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "apply_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "apply_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "sync_error_count",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "seq_sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "confl_insert_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "sync_error_count",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 5, "confl_update_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_insert_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 6, "confl_update_exists",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_update_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 7, "confl_update_deleted",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_update_exists",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 8, "confl_update_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_update_deleted",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 9, "confl_delete_origin_differs",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_update_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 10, "confl_delete_missing",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_delete_origin_differs",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 11, "confl_multiple_unique_conflicts",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_delete_missing",
 					   INT8OID, -1, 0);
-	TupleDescInitEntry(tupdesc, (AttrNumber) 12, "stats_reset",
+	TupleDescInitEntry(tupdesc, (AttrNumber) i++, "confl_multiple_unique_conflicts",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) i, "stats_reset",
 					   TIMESTAMPTZOID, -1, 0);
+
+	Assert(i == PG_STAT_GET_SUBSCRIPTION_STATS_COLS);
+
 	BlessTupleDesc(tupdesc);
 
 	if (!subentry)
@@ -2250,12 +2255,18 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)
 		subentry = &allzero;
 	}
 
+	/* Reset index for building the result tuple values */
+	i = 0;
+
 	/* subid */
 	values[i++] = ObjectIdGetDatum(subid);
 
 	/* apply_error_count */
 	values[i++] = Int64GetDatum(subentry->apply_error_count);
 
+	/* seq_sync_error_count */
+	values[i++] = Int64GetDatum(subentry->seq_sync_error_count);
+
 	/* sync_error_count */
 	values[i++] = Int64GetDatum(subentry->sync_error_count);
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 34b7fddb0e7..5cf9e12fcb9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5704,9 +5704,9 @@
 { oid => '6231', descr => 'statistics: information about subscription stats',
   proname => 'pg_stat_get_subscription_stats', provolatile => 's',
   proparallel => 'r', prorettype => 'record', proargtypes => 'oid',
-  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
-  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o}',
-  proargnames => '{subid,subid,apply_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
+  proallargtypes => '{oid,oid,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,int8,timestamptz}',
+  proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o}',
+  proargnames => '{subid,subid,apply_error_count,seq_sync_error_count,sync_error_count,confl_insert_exists,confl_update_origin_differs,confl_update_exists,confl_update_deleted,confl_update_missing,confl_delete_origin_differs,confl_delete_missing,confl_multiple_unique_conflicts,stats_reset}',
   prosrc => 'pg_stat_get_subscription_stats' },
 { oid => '6118', descr => 'statistics: information about subscription',
   proname => 'pg_stat_get_subscription', prorows => '10', proisstrict => 'f',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 7ae503e71a2..a0610bb3e31 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -16,6 +16,7 @@
 #include "portability/instr_time.h"
 #include "postmaster/pgarch.h"	/* for MAX_XFN_CHARS */
 #include "replication/conflict.h"
+#include "replication/worker_internal.h"
 #include "utils/backend_progress.h" /* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/backend_status.h"	/* for backward compatibility */	/* IWYU pragma: export */
 #include "utils/pgstat_kind.h"
@@ -108,6 +109,7 @@ typedef struct PgStat_FunctionCallUsage
 typedef struct PgStat_BackendSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 } PgStat_BackendSubEntry;
@@ -416,6 +418,7 @@ typedef struct PgStat_SLRUStats
 typedef struct PgStat_StatSubEntry
 {
 	PgStat_Counter apply_error_count;
+	PgStat_Counter seq_sync_error_count;
 	PgStat_Counter sync_error_count;
 	PgStat_Counter conflict_count[CONFLICT_NUM_TYPES];
 	TimestampTz stat_reset_timestamp;
@@ -769,7 +772,8 @@ extern PgStat_SLRUStats *pgstat_fetch_slru(void);
  * Functions in pgstat_subscription.c
  */
 
-extern void pgstat_report_subscription_error(Oid subid, bool is_apply_error);
+extern void pgstat_report_subscription_error(Oid subid,
+											 LogicalRepWorkerType wtype);
 extern void pgstat_report_subscription_conflict(Oid subid, ConflictType type);
 extern void pgstat_create_subscription(Oid subid);
 extern void pgstat_drop_subscription(Oid subid);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2bf968ae3d3..7c52181cbcb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2191,6 +2191,7 @@ pg_stat_subscription| SELECT su.oid AS subid,
 pg_stat_subscription_stats| SELECT ss.subid,
     s.subname,
     ss.apply_error_count,
+    ss.seq_sync_error_count,
     ss.sync_error_count,
     ss.confl_insert_exists,
     ss.confl_update_origin_differs,
@@ -2202,7 +2203,7 @@ pg_stat_subscription_stats| SELECT ss.subid,
     ss.confl_multiple_unique_conflicts,
     ss.stats_reset
    FROM pg_subscription s,
-    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
+    LATERAL pg_stat_get_subscription_stats(s.oid) ss(subid, apply_error_count, seq_sync_error_count, sync_error_count, confl_insert_exists, confl_update_origin_differs, confl_update_exists, confl_update_deleted, confl_update_missing, confl_delete_origin_differs, confl_delete_missing, confl_multiple_unique_conflicts, stats_reset);
 pg_stat_sys_indexes| SELECT relid,
     indexrelid,
     schemaname,
diff --git a/src/test/subscription/t/026_stats.pl b/src/test/subscription/t/026_stats.pl
index 00a1c2fcd48..fc0bcee5187 100644
--- a/src/test/subscription/t/026_stats.pl
+++ b/src/test/subscription/t/026_stats.pl
@@ -21,10 +21,16 @@ $node_subscriber->start;
 
 sub create_sub_pub_w_errors
 {
-	my ($node_publisher, $node_subscriber, $db, $table_name) = @_;
-	# Initial table setup on both publisher and subscriber. On subscriber we
-	# create the same tables but with primary keys. Also, insert some data that
-	# will conflict with the data replicated from publisher later.
+	my ($node_publisher, $node_subscriber, $db, $table_name, $sequence_name)
+	  = @_;
+	# Initial table and sequence setup on both publisher and subscriber.
+	#
+	# Tables: Created on both nodes, but the subscriber version includes
+	# primary keys and pre-populated data that will intentionally conflict with
+	# replicated data from the publisher.
+	#
+	# Sequences: Created on both nodes with different INCREMENT values to
+	# intentionally trigger replication conflicts.
 	$node_publisher->safe_psql(
 		$db,
 		qq[
@@ -32,6 +38,7 @@ sub create_sub_pub_w_errors
 	CREATE TABLE $table_name(a int);
 	ALTER TABLE $table_name REPLICA IDENTITY FULL;
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name;
 	COMMIT;
 	]);
 	$node_subscriber->safe_psql(
@@ -40,35 +47,57 @@ sub create_sub_pub_w_errors
 	BEGIN;
 	CREATE TABLE $table_name(a int primary key);
 	INSERT INTO $table_name VALUES (1);
+	CREATE SEQUENCE $sequence_name INCREMENT BY 10;
 	COMMIT;
 	]);
 
 	# Set up publication.
 	my $pub_name = $table_name . '_pub';
+	my $pub_seq_name = $sequence_name . '_pub';
 	my $publisher_connstr = $node_publisher->connstr . qq( dbname=$db);
 
-	$node_publisher->safe_psql($db,
-		qq(CREATE PUBLICATION $pub_name FOR TABLE $table_name));
+	$node_publisher->safe_psql(
+		$db,
+		qq[
+	CREATE PUBLICATION $pub_name FOR TABLE $table_name;
+	CREATE PUBLICATION $pub_seq_name FOR ALL SEQUENCES;
+	]);
 
 	# Create subscription. The tablesync for table on subscription will enter into
-	# infinite error loop due to violating the unique constraint.
+	# infinite error loop due to violating the unique constraint. The sequencesync
+	# will also fail due to different sequence increment values on publisher and
+	# subscriber.
 	my $sub_name = $table_name . '_sub';
 	$node_subscriber->safe_psql($db,
-		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name)
+		qq(CREATE SUBSCRIPTION $sub_name CONNECTION '$publisher_connstr' PUBLICATION $pub_name, $pub_seq_name)
 	);
 
 	$node_publisher->wait_for_catchup($sub_name);
 
-	# Wait for the tablesync error to be reported.
+	# Wait for the tablesync and sequencesync error to be reported.
 	$node_subscriber->poll_query_until(
 		$db,
 		qq[
-	SELECT sync_error_count > 0
-	FROM pg_stat_subscription_stats
-	WHERE subname = '$sub_name'
+	SELECT count(1) = 1 FROM pg_stat_subscription_stats
+	WHERE subname = '$sub_name' AND seq_sync_error_count > 0 AND sync_error_count > 0
+	])
+	  or die
+	  qq(Timed out while waiting for sequencesync errors and tablesync errors for subscription '$sub_name');
+
+	# Change the sequence INCREMENT value back to the default on the subscriber
+	# so it doesn't error out.
+	$node_subscriber->safe_psql($db,
+		qq(ALTER SEQUENCE $sequence_name INCREMENT 1));
+
+	# Wait for sequencesync to finish.
+	$node_subscriber->poll_query_until(
+		$db,
+		qq[
+	SELECT count(1) = 1 FROM pg_subscription_rel
+	WHERE srrelid = '$sequence_name'::regclass AND srsubstate = 'r'
 	])
 	  or die
-	  qq(Timed out while waiting for tablesync errors for subscription '$sub_name');
+	  qq(Timed out while waiting for subscriber to synchronize data for sequence '$sequence_name'.);
 
 	# Truncate test_tab1 so that tablesync worker can continue.
 	$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));
@@ -136,14 +165,17 @@ is($result, qq(0),
 
 # Create the publication and subscription with sync and apply errors
 my $table1_name = 'test_tab1';
+my $sequence1_name = 'test_seq1';
 my ($pub1_name, $sub1_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table1_name);
+	$table1_name, $sequence1_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset
+# timestamp is NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -151,8 +183,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Check that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Check that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for subscription '$sub1_name'.)
 );
 
 # Reset a single subscription
@@ -160,10 +192,12 @@ $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats((SELECT subid FROM pg_stat_subscription_stats WHERE subname = '$sub1_name')))
 );
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -171,8 +205,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL after reset for subscription '$sub1_name'.)
 );
 
 # Get reset timestamp
@@ -198,14 +232,17 @@ is( $node_subscriber->safe_psql(
 
 # Make second subscription and publication
 my $table2_name = 'test_tab2';
+my $sequence2_name = 'test_seq2';
 my ($pub2_name, $sub2_name) =
   create_sub_pub_w_errors($node_publisher, $node_subscriber, $db,
-	$table2_name);
+	$table2_name, $sequence2_name);
 
-# Apply errors, sync errors, and conflicts are > 0 and stats_reset timestamp is NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are > 0
+# and stats_reset timestamp is NULL
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count > 0,
+	seq_sync_error_count > 0,
 	sync_error_count > 0,
 	confl_insert_exists > 0,
 	confl_delete_missing > 0,
@@ -213,18 +250,20 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are > 0 and stats_reset is NULL for sub '$sub2_name'.)
 );
 
 # Reset all subscriptions
 $node_subscriber->safe_psql($db,
 	qq(SELECT pg_stat_reset_subscription_stats(NULL)));
 
-# Apply errors, sync errors, and conflicts are 0 and stats_reset timestamp is not NULL
+# Apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and
+# stats_reset timestamp is not NULL.
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -232,13 +271,14 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub1_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub1_name' after reset.)
 );
 
 is( $node_subscriber->safe_psql(
 		$db,
 		qq(SELECT apply_error_count = 0,
+	seq_sync_error_count = 0,
 	sync_error_count = 0,
 	confl_insert_exists = 0,
 	confl_delete_missing = 0,
@@ -246,8 +286,8 @@ is( $node_subscriber->safe_psql(
 	FROM pg_stat_subscription_stats
 	WHERE subname = '$sub2_name')
 	),
-	qq(t|t|t|t|t),
-	qq(Confirm that apply errors, sync errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
+	qq(t|t|t|t|t|t),
+	qq(Confirm that apply errors, sequencesync errors, tablesync errors, errors, and conflicts are 0 and stats_reset is not NULL for sub '$sub2_name' after reset.)
 );
 
 $reset_time1 = $node_subscriber->safe_psql($db,
-- 
2.43.0

v20251107-0002-Documentation-for-sequence-synchronization.patch
From 215b7c744bf9cad9376124bbcae47f61974392e0 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251107 2/2] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  14 +-
 doc/src/sgml/func/func-sequence.sgml      |  24 +++
 doc/src/sgml/logical-replication.sgml     | 235 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  31 ++-
 7 files changed, 292 insertions(+), 34 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d8a9f14b618..cf073f28f84 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5191,9 +5191,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, or table/sequence synchronization
+        worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5334,8 +5334,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, and  table/sequence
+        synchronization workers.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5359,9 +5359,11 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
         during the subscription initialization or when new tables are added.
+        One additional worker is also needed for sequence synchronization.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..80e51e9e365 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,30 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates last sequence value set in sequence by nextval or setval,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index daab2cae989..d7bf465f948 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences can be
+   synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,205 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences currently known to the subscription.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    The sequence synchronization worker validates that sequence definitions
+    match between publisher and subscriber. If mismatches exist, the worker
+    logs an error identifying them and exits. The apply worker continues
+    respawning the sequence synchronization worker until synchronization
+    succeeds. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To detect this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <function>pg_get_sequence_data</function> for the sequence on the publisher.
+    Then run <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link> to
+    resynchronize if necessary.
+   </para>
+   <warning>
+    <para>
+     Each sequence caches a block of values (typically 32) in memory before
+     generating a new WAL record, so its LSN advances only after the entire
+     cached batch has been consumed. As a result, sequence value drift cannot be
+     detected by comparing LSNs for sequence increments that fall within the
+     same cached block.
+    </para>
+   </warning>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+/* pub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* pub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side a few times.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+/* pub # */ CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+/* sub # */ CREATE SUBSCRIPTION sub1
+/* sub - */ CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+/* sub - */ PUBLICATION pub1;
+</programlisting></para>
+
+   <para>
+    Observe that initial sequence values are synchronized.
+<programlisting>
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Update the sequences at the publisher side.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      12
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     120
+(1 row)
+</programlisting></para>
+
+   <para>
+    Re-synchronize all sequences known to the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         12 |      30 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        120 |      30 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2291,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2423,8 +2627,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, and
+    table/sequence synchronization workers.
    </para>
 
    <para>
@@ -2437,8 +2641,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 2741c138593..7b9fa20df9e 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..197be0c6f6b 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -127,10 +127,10 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
 
          <para>
           Since no connection is made when this option is
-          <literal>false</literal>, no tables are subscribed. To initiate
-          replication, you must manually create the replication slot, enable
-          the failover if required, enable the subscription, and refresh the
-          subscription. See
+          <literal>false</literal>, no tables and sequences are subscribed. To
+          initiate replication, you must manually create the replication slot,
+          enable the failover if required, enable the subscription, and refresh
+          the subscription. See
           <xref linkend="logical-replication-subscription-examples-deferred-slot"/>
           for examples.
          </para>
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter has no effect for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter has no effect for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter has no effect
+          for sequences.
          </para>
 
          <para>
@@ -398,8 +407,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
          <para>
           If true, all replication actions are performed as the subscription
           owner. If false, replication workers will perform actions on each
-          table as the owner of that table. The latter configuration is
-          generally much more secure; for details, see
+          table or sequence as the owner of that relation. The latter
+          configuration is generally much more secure; for details, see
           <xref linkend="logical-replication-security" />.
           The default is <literal>false</literal>.
          </para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter has no effect for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

#476shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#475)
Re: Logical Replication of sequences

On Fri, Nov 7, 2025 at 10:58 AM vignesh C <vignesh21@gmail.com> wrote:

Thanks for pushing the patch, here is a rebased version of the
remaining patches.

Please find a few comments on doc patch:

1)
+    them. To verify this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <function>pg_get_sequence_data</function> for the sequence on the
publisher.

Is there a way to give link of 'pg_get_sequence_data' here?

2)
+   <warning>
+    <para>
+     Each sequence caches a block of values (typically 32) in memory before
+     generating a new WAL record, so its LSN advances only after the entire
+     cached batch has been consumed. As a result, sequence value
drift cannot be
+     detected by comparing LSNs for sequence increments that fall within the
+     same cached block.
+    </para>
+   </warning>

In such a case, shall we mention that compare last_value to see the
drift? Thoughts?

3)

+    To detect this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <function>pg_get_sequence_data</function> for the sequence on the
publisher.

We have mentioned above. But in the example of the same, we do not
show srsublsn or page_lsn anywhere. Shall we query and show that as
well?

4)
Maximum number of synchronization workers per subscription. This
parameter controls the amount of parallelism of the initial data copy
during the subscription initialization or when new tables are added.
+ One additional worker is also needed for sequence synchronization.
</para>

Since now the first line is talking only about table-sync, shall we tweak it:
'of the initial data copy' --> 'of the initial data copy for tables'

5)
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates last sequence value set in sequence by nextval or setval,

last_value can also be set by seq synchronization. Do you think that
we need to mention that or current info is good enough?

thanks
Shveta

#477Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#475)
Re: Logical Replication of sequences

On Fri, Nov 7, 2025 at 10:58 AM vignesh C <vignesh21@gmail.com> wrote:

Thanks for pushing the patch, here is a rebased version of the
remaining patches.

Pushed, after reverting the change that converted the fixed offsets to a
counter-based scheme for view columns. We use the same method (fixed
column numbers) in most other exposed functions, so I kept it that way
for the sake of consistency.

--
With Regards,
Amit Kapila.

#478vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#476)
1 attachment(s)
Re: Logical Replication of sequences

On Fri, 7 Nov 2025 at 14:54, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Nov 7, 2025 at 10:58 AM vignesh C <vignesh21@gmail.com> wrote:

Thanks for pushing the patch, here is a rebased version of the
remaining patches.

Please find a few comments on doc patch:

1)
+    them. To verify this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <function>pg_get_sequence_data</function> for the sequence on the
publisher.

Is there a way to give link of 'pg_get_sequence_data' here?

Modified

2)
+   <warning>
+    <para>
+     Each sequence caches a block of values (typically 32) in memory before
+     generating a new WAL record, so its LSN advances only after the entire
+     cached batch has been consumed. As a result, sequence value
drift cannot be
+     detected by comparing LSNs for sequence increments that fall within the
+     same cached block.
+    </para>
+   </warning>

In such a case, shall we mention that compare last_value to see the
drift? Thoughts?

I was not sure as it might not be very efficient
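
For what it's worth, a rough sketch of such a comparison, run on the
subscriber (purely illustrative and not part of the patch; it assumes the
dblink extension and reuses the example connection string and sequence
name s1):

```
-- Compare the subscriber's local sequence state and synced LSN with the
-- publisher's values fetched via dblink.
CREATE EXTENSION IF NOT EXISTS dblink;

SELECT sub.last_value AS subscriber_last_value,
       pub.last_value AS publisher_last_value,
       rel.srsublsn   AS subscriber_synced_lsn,
       pub.page_lsn   AS publisher_page_lsn
FROM s1 AS sub,
     pg_subscription_rel AS rel,
     dblink('host=localhost dbname=test_pub',
            'SELECT last_value, page_lsn FROM pg_get_sequence_data(''s1'')')
       AS pub (last_value bigint, page_lsn pg_lsn)
WHERE rel.srrelid = 's1'::regclass;
```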

3)

+    To detect this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <function>pg_get_sequence_data</function> for the sequence on the
publisher.

We have mentioned above. But in the example of the same, we do not
show srsublsn or page_lsn anywhere. Shall we query and show that as
well?

Updated example

4)
Maximum number of synchronization workers per subscription. This
parameter controls the amount of parallelism of the initial data copy
during the subscription initialization or when new tables are added.
+ One additional worker is also needed for sequence synchronization.
</para>

Since now the first line is talking only about table-sync, shall we tweak it:
'of the initial data copy' --> 'of the initial data copy for tables'

Modified

5)
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates last sequence value set in sequence by nextval or setval,

last_value can also be set by seq synchronization. Do you think that
we need to mention that or current info is good enough?

Updated

The attached v20251107_2 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20251107_2-0001-Documentation-for-sequence-synchronizati.patch (application/octet-stream)
From 1cadf74fcf6d5c3a48cab281aa411ba81cc4972d Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251107_2] Documentation for sequence synchronization
 feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  17 +-
 doc/src/sgml/func/func-sequence.sgml      |  25 ++
 doc/src/sgml/logical-replication.sgml     | 307 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  31 ++-
 7 files changed, 354 insertions(+), 48 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 683f7c36f46..6490336a687 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5198,9 +5198,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker, or table/sequence synchronization
+        worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5341,8 +5341,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, and  table/sequence
+        synchronization workers.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5365,10 +5365,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        for tables during the subscription initialization or when new tables
+        are added. One additional worker is also needed for sequence
+        synchronization.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..3169d1def70 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -143,6 +143,31 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry" id="func-pg-get-sequence-data"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates last sequence value set in sequence by nextval, setval, or
+        during logical replication sequence synchronization,
+        <literal>is_called</literal> indicates whether the sequence has been
+        used, and <literal>page_lsn</literal> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index daab2cae989..52fde387f60 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences can be
+   synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -1745,6 +1747,245 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences currently known to the subscription.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A new <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    The sequence synchronization worker validates that sequence definitions
+    match between publisher and subscriber. If mismatches exist, the worker
+    logs an error identifying them and exits. The apply worker continues
+    respawning the sequence synchronization worker until synchronization
+    succeeds. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Stale Sequences</title>
+   <para>
+    Subscriber side sequence values may frequently become out of sync due to
+    updates on the publisher.
+   </para>
+   <para>
+    To detect this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <link linkend="func-pg-get-sequence-data">pg_get_sequence_data</link> for the sequence on the publisher.
+    Then run <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link> to
+    resynchronize if necessary.
+   </para>
+   <warning>
+    <para>
+     Each sequence caches a block of values (typically 32) in memory before
+     generating a new WAL record, so its LSN advances only after the entire
+     cached batch has been consumed. As a result, sequence value drift cannot be
+     detected by comparing LSNs for sequence increments that fall within the
+     same cached block.
+    </para>
+   </warning>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+/* pub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* pub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Advance the sequences on the publisher a few times.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Check the sequence page LSNs on the publisher.
+<programlisting>
+/* pub # */ SELECT * FROM pg_get_sequence_data('s1');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+         11 | t         | 0/0178F9E0
+(1 row)
+/* pub # */ SELECT * FROM pg_get_sequence_data('s2');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+        110 | t         | 0/0178FAB0
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+/* pub # */ CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+/* sub # */ CREATE SUBSCRIPTION sub1
+/* sub - */ CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+/* sub - */ PUBLICATION pub1;
+</programlisting></para>
+
+   <para>
+    Verify that the initial sequence values are synchronized.
+<programlisting>
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         11 |      31 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        110 |      31 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Confirm that the publisher's page LSN has been recorded on the subscriber.
+<programlisting>
+/* sub # */ SELECT srrelid::regclass, srsublsn FROM pg_subscription_rel ;
+ srrelid |  srsublsn
+---------+------------
+ s1      | 0/0178F9E0
+ s2      | 0/0178FAB0
+(2 rows)
+</programlisting></para>
+
+   <para>
+    Advance the sequences on the publisher 50 more times.
+<programlisting>
+/* pub # */  SELECT nextval('s1') FROM generate_series(1,50);
+/* pub # */  SELECT nextval('s1') FROM generate_series(1,50);
+</programlisting></para>
+
+   <para>
+    Check the updated page LSNs on the publisher:
+<programlisting>
+/* pub # */ SELECT * FROM pg_get_sequence_data('s1');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+         61 | t         | 0/017CED28
+(1 row)
+
+/* pub # */ SELECT * FROM pg_get_sequence_data('s2');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+        610 | t         | 0/017CEDF8
+(1 row)
+</programlisting></para>
+
+   <para>
+    The difference between the publisher's page LSN and the subscriber's stored
+    LSN indicates that the sequences are out of sync. Re-synchronize all
+    sequences known to the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Recheck the sequences on the subscriber:
+<programlisting>
+/* sub # */ SELECT * FROM s1;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+         61 |       0 | t
+(1 row)
+
+/* sub # */ SELECT * FROM s2
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        610 |       0 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2331,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2290,9 +2534,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
   </para>
 
   <para>
-   In order to be able to copy the initial table data, the role used for the
-   replication connection must have the <literal>SELECT</literal> privilege on
-   a published table (or be a superuser).
+   In order to be able to copy the initial table or sequence data, the role
+   used for the replication connection must have the <literal>SELECT</literal>
+   privilege on a published table or sequence (or be a superuser).
   </para>
 
   <para>
@@ -2303,8 +2547,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
   <para>
    To add tables to a publication, the user must have ownership rights on the
    table. To add all tables in schema to a publication, the user must be a
-   superuser. To create a publication that publishes all tables or all tables in
-   schema automatically, the user must be a superuser.
+   superuser. To create a publication that publishes all tables, all tables in
+   schema, or all sequences automatically, the user must be a superuser.
   </para>
 
   <para>
@@ -2329,8 +2573,11 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    privileges of the subscription owner. However, when performing an insert,
    update, delete, or truncate operation on a particular table, it will switch
    roles to the table owner and perform the operation with the table owner's
-   privileges. This means that the subscription owner needs to be able to
-   <literal>SET ROLE</literal> to each role that owns a replicated table.
+   privileges. Similarly, when synchronizing sequence data, it will switch to
+   the sequence owner's role and perform the operation using the sequence
+   owner's privileges. This means that the subscription owner needs to be able
+   to <literal>SET ROLE</literal> to each role that owns a replicated table or
+   sequence.
   </para>
 
   <para>
@@ -2341,12 +2588,15 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    needs privileges to <literal>SELECT</literal>, <literal>INSERT</literal>,
    <literal>UPDATE</literal>, and <literal>DELETE</literal> from the
    target table, and does not need privileges to <literal>SET ROLE</literal>
-   to the table owner. However, this also means that any user who owns
-   a table into which replication is happening can execute arbitrary code with
-   the privileges of the subscription owner. For example, they could do this
-   by simply attaching a trigger to one of the tables which they own.
-   Because it is usually undesirable to allow one role to freely assume
-   the privileges of another, this option should be avoided unless user
+   to the table owner. Similarly, when synchronizing sequence data, the
+   subscription owner only needs privileges to <literal>UPDATE</literal> the
+   target sequence, and  does not need privileges to <literal>SET ROLE</literal>
+   to the sequence owner. However, this also means that any user who owns
+   a table or sequence into which replication is happening can execute
+   arbitrary code with the privileges of the subscription owner. For example,
+   they could do this by simply attaching a trigger to one of the tables which
+   they own. Because it is usually undesirable to allow one role to freely
+   assume the privileges of another, this option should be avoided unless user
    security within the database is of no concern.
   </para>
 
@@ -2423,8 +2673,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, and
+    table/sequence synchronization workers.
    </para>
 
    <para>
@@ -2437,8 +2687,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 2741c138593..7b9fa20df9e 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..197be0c6f6b 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -127,10 +127,10 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
 
          <para>
           Since no connection is made when this option is
-          <literal>false</literal>, no tables are subscribed. To initiate
-          replication, you must manually create the replication slot, enable
-          the failover if required, enable the subscription, and refresh the
-          subscription. See
+          <literal>false</literal>, no tables and sequences are subscribed. To
+          initiate replication, you must manually create the replication slot,
+          enable the failover if required, enable the subscription, and refresh
+          the subscription. See
           <xref linkend="logical-replication-subscription-examples-deferred-slot"/>
           for examples.
          </para>
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter has no effect for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter has no effect for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter has no effect
+          for sequences.
          </para>
 
          <para>
@@ -398,8 +407,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
          <para>
           If true, all replication actions are performed as the subscription
           owner. If false, replication workers will perform actions on each
-          table as the owner of that table. The latter configuration is
-          generally much more secure; for details, see
+          table or sequence as the owner of that relation. The latter
+          configuration is generally much more secure; for details, see
           <xref linkend="logical-replication-security" />.
           The default is <literal>false</literal>.
          </para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter has no effect for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0
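
As a side note on the warning added in the documentation patch above, the
value-caching behaviour can be observed directly. A minimal illustration
(the sequence name is a placeholder and exact LSNs will vary):

```
CREATE SEQUENCE demo_seq;

SELECT nextval('demo_seq');                             -- WAL-logs a batch of values
SELECT page_lsn FROM pg_get_sequence_data('demo_seq');  -- note the LSN

SELECT nextval('demo_seq');                             -- served from the cached batch
SELECT page_lsn FROM pg_get_sequence_data('demo_seq');  -- typically unchanged
```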

#479Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#478)
1 attachment(s)
Re: Logical Replication of sequences

On Fri, 7 Nov 2025 at 20:18, vignesh C <vignesh21@gmail.com> wrote:

On Fri, 7 Nov 2025 at 14:54, shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Nov 7, 2025 at 10:58 AM vignesh C <vignesh21@gmail.com> wrote:

Thanks for pushing the patch, here is a rebased version of the
remaining patches.

Please find a few comments on doc patch:

1)
+    them. To verify this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <function>pg_get_sequence_data</function> for the sequence on the
publisher.

Is there a way to give link of 'pg_get_sequence_data' here?

Modified

2)
+   <warning>
+    <para>
+     Each sequence caches a block of values (typically 32) in memory before
+     generating a new WAL record, so its LSN advances only after the entire
+     cached batch has been consumed. As a result, sequence value
drift cannot be
+     detected by comparing LSNs for sequence increments that fall within the
+     same cached block.
+    </para>
+   </warning>

In such a case, shall we mention that compare last_value to see the
drift? Thoughts?

I was not sure as it might not be very efficient

3)

+    To detect this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the page_lsn obtained from the
+    <function>pg_get_sequence_data</function> for the sequence on the
publisher.

We have mentioned above. But in the example of the same, we do not
show srsublsn or page_lsn anywhere. Shall we query and show that as
well?

Updated example

4)
Maximum number of synchronization workers per subscription. This
parameter controls the amount of parallelism of the initial data copy
during the subscription initialization or when new tables are added.
+ One additional worker is also needed for sequence synchronization.
</para>

Since now the first line is talking only about table-sync, shall we tweak it:
'of the initial data copy' --> 'of the initial data copy for tables'

Modified

5)
+        Returns information about the sequence. <literal>last_value</literal>
+        indicates last sequence value set in sequence by nextval or setval,

last_value can also be set by seq synchronization. Do you think that
we need to mention that or current info is good enough?

Updated

The attached v20251107_2 version patch has the changes for the same.

Hi Vignesh,

While working on another thread, I found that in HEAD gram.y has
grammar which was committed as part of this thread:
```
| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);

n->pubname = $3;
n->pubobjects = (List *) $5;
preprocess_pub_all_objtype_list($5, &n->for_all_tables,
&n->for_all_sequences,
yyscanner);
n->options = $6;
$$ = (Node *) n;
}
```

Here we are assigning "n->pubobjects = (List *) $5". But later in the
code this is not used anywhere for ALL TABLES/ ALL SEQUENCES
publication. It is used for other publications (not ALL TABLES/
SEQUENCES) inside function "ObjectsInPublicationToOids"

So are we required to assign "n->pubobjects" here?

I have created a patch to remove this assignment. It passed "make check-world".
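
For reference, the statement forms that go through this production are the
ALL-object variants (publication names here are just examples):

```
CREATE PUBLICATION pub_all_tables FOR ALL TABLES;
CREATE PUBLICATION pub_all_sequences FOR ALL SEQUENCES;
```

Neither form needs the per-object list, which is why the "n->pubobjects"
assignment is unused for them.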

Thanks,
Shlok Kyal

Attachments:

v1-0001-Remove-unused-n-pubobjects-assignment-for-CREATE-.patch (application/octet-stream)
From ca4357fdd125b5135e0d9da887aabf2c062e5220 Mon Sep 17 00:00:00 2001
From: Shlok Kyal <shlok.kyal.oss@gmail.com>
Date: Mon, 10 Nov 2025 13:48:28 +0530
Subject: [PATCH v1] Remove unused n->pubobjects assignment for CREATE
 PUBLICATION .. ALL TABLES/SEQUENCES

For grammar: CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
"n->pubobjects" is assigned but later in the code it is not used. It is
used in function "ObjectsInPublicationToOids" and is not required for
ALL TABLES or ALL SEQUENCES publications.
This patch removes the assignment of "n->pubobjects" for the above grammar.
---
 src/backend/parser/gram.y | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 57fe0186547..dfe757b8f5e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10761,7 +10761,6 @@ CreatePublicationStmt:
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->pubobjects = (List *) $5;
 					preprocess_pub_all_objtype_list($5, &n->for_all_tables,
 													&n->for_all_sequences,
 													yyscanner);
-- 
2.34.1

#480vignesh C
vignesh21@gmail.com
In reply to: Shlok Kyal (#479)
Re: Logical Replication of sequences

On Mon, 10 Nov 2025 at 14:34, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

While working on another thread, I found that in HEAD gram.y has
grammar which was committed as part of this thread:
```
| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);

n->pubname = $3;
n->pubobjects = (List *) $5;
preprocess_pub_all_objtype_list($5, &n->for_all_tables,
&n->for_all_sequences,
yyscanner);
n->options = $6;
$$ = (Node *) n;
}
```

Here we are assigning "n->pubobjects = (List *) $5". But later in the
code this is not used anywhere for ALL TABLES/ ALL SEQUENCES
publication. It is used for other publications (not ALL TABLES/
SEQUENCES) inside function "ObjectsInPublicationToOids"

So are we required to assign "n->pubobjects" here?

I have created a patch to remove this assignment. It passed "make check-world".

I agree that this is not required for ALL TABLES/ALL SEQUENCES cases.
Your changes look good to me.

Regards,
Vignesh

#481Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#480)
Re: Logical Replication of sequences

On Mon, Nov 10, 2025 at 4:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Nov 2025 at 14:34, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

While working on another thread, I found that in HEAD gram.y has
grammar which was committed as part of this thread:
```
| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);

n->pubname = $3;
n->pubobjects = (List *) $5;
preprocess_pub_all_objtype_list($5, &n->for_all_tables,
&n->for_all_sequences,
yyscanner);
n->options = $6;
$$ = (Node *) n;
}
```

Here we are assigning "n->pubobjects = (List *) $5". But later in the
code this is not used anywhere for ALL TABLES/ ALL SEQUENCES
publication. It is used for other publications (not ALL TABLES/
SEQUENCES) inside function "ObjectsInPublicationToOids"

So are we required to assign "n->pubobjects" here?

I have created a patch to remove this assignment. It passed "make check-world".

I agree that this is not required for ALL TABLES/ALL SEQUENCES cases.

I also agree. BTW, can we change pub_obj_type_list to
pub_all_obj_type_list to be more explicit about this variant? If you
agree then we can make similar changes
(pub_obj_type->pub_all_obj_type) at the following places as well:

+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *

--
With Regards,
Amit Kapila.

#482Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: Amit Kapila (#481)
1 attachment(s)
Re: Logical Replication of sequences

On Tue, 11 Nov 2025 at 09:02, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 10, 2025 at 4:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Nov 2025 at 14:34, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

While working on another thread, I found that in HEAD gram.y has
grammar which was committed as part of this thread:
```
| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);

n->pubname = $3;
n->pubobjects = (List *) $5;
preprocess_pub_all_objtype_list($5, &n->for_all_tables,
&n->for_all_sequences,
yyscanner);
n->options = $6;
$$ = (Node *) n;
}
```

Here we are assigning "n->pubobjects = (List *) $5". But later in the
code this is not used anywhere for ALL TABLES/ ALL SEQUENCES
publication. It is used for other publications (not ALL TABLES/
SEQUENCES) inside function "ObjectsInPublicationToOids"

So are we required to assign "n->pubobjects" here?

I have created a patch to remove this assignment. It passed "make check-world".

I agree that this is not required for ALL TABLES/ALL SEQUENCES cases.

I also agree. BTW, can we change pub_obj_type_list to
pub_all_obj_type_list to be more explicit about this variant? If you
agree then we can make similar changes
(pub_obj_type->pub_all_obj_type) at the following places as well:

+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *

Hi Vignesh, Amit

Thanks for confirming.
Also, I agree that the name 'pub_all_obj_type' will be more suitable.
I have updated the patch for the same.

Thanks,
Shlok Kyal

Attachments:

v2-0001-Remove-unused-n-pubobjects-assignment-for-CREATE-.patch (application/octet-stream)
From 16f5557fb08d0ce5f41d509ef3fb475a10eedb75 Mon Sep 17 00:00:00 2001
From: Shlok Kyal <shlok.kyal.oss@gmail.com>
Date: Mon, 10 Nov 2025 13:48:28 +0530
Subject: [PATCH v2] Remove unused n->pubobjects assignment for CREATE
 PUBLICATION .. ALL TABLES/SEQUENCES

For grammar: CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
"n->pubobjects" is assigned but later in the code it is not used. It is
used in function "ObjectsInPublicationToOids" and we do not require it for
ALL TABLES or ALL SEQUENCES publications.
This patch removes the assignment of "n->pubobjects" for the above grammar
and also renames 'pub_obj_type' to 'pub_all_obj_type' for better
clarity.
---
 src/backend/parser/gram.y | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 57fe0186547..c3a0a354a9c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -453,7 +453,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
 				vacuum_relation_list opt_vacuum_relation_list
-				drop_option_list pub_obj_list pub_obj_type_list
+				drop_option_list pub_obj_list pub_all_obj_type_list
 
 %type <retclause> returning_clause
 %type <node>	returning_option
@@ -10731,9 +10731,9 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
  *
  * CREATE PUBLICATION name [WITH options]
  *
- * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ * CREATE PUBLICATION FOR ALL pub_all_obj_type [, ...] [WITH options]
  *
- * pub_obj_type is one of:
+ * pub_all_obj_type is one of:
  *
  *		TABLES
  *		SEQUENCES
@@ -10756,12 +10756,11 @@ CreatePublicationStmt:
 					n->options = $4;
 					$$ = (Node *) n;
 				}
-			| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
+			| CREATE PUBLICATION name FOR pub_all_obj_type_list opt_definition
 				{
 					CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
 
 					n->pubname = $3;
-					n->pubobjects = (List *) $5;
 					preprocess_pub_all_objtype_list($5, &n->for_all_tables,
 													&n->for_all_sequences,
 													yyscanner);
@@ -10892,9 +10891,9 @@ PublicationAllObjSpec:
 					}
 					;
 
-pub_obj_type_list:	PublicationAllObjSpec
+pub_all_obj_type_list:	PublicationAllObjSpec
 					{ $$ = list_make1($1); }
-				| pub_obj_type_list ',' PublicationAllObjSpec
+				| pub_all_obj_type_list ',' PublicationAllObjSpec
 					{ $$ = lappend($1, $3); }
 	;
 
-- 
2.34.1

#483vignesh C
vignesh21@gmail.com
In reply to: Shlok Kyal (#482)
Re: Logical Replication of sequences

On Tue, 11 Nov 2025 at 09:59, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

On Tue, 11 Nov 2025 at 09:02, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 10, 2025 at 4:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, 10 Nov 2025 at 14:34, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

While working on another thread, I found that in HEAD gram.y has
grammar which was committed as part of this thread:
```
| CREATE PUBLICATION name FOR pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);

n->pubname = $3;
n->pubobjects = (List *) $5;
preprocess_pub_all_objtype_list($5, &n->for_all_tables,
&n->for_all_sequences,
yyscanner);
n->options = $6;
$$ = (Node *) n;
}
```

Here we are assigning "n->pubobjects = (List *) $5". But later in the
code this is not used anywhere for ALL TABLES/ ALL SEQUENCES
publication. It is used for other publications (not ALL TABLES/
SEQUENCES) inside function "ObjectsInPublicationToOids"

So are we required to assign "n->pubobjects" here?

I have created a patch to remove this assignment. It passed "make check-world".

I agree that this is not required for ALL TABLES/ALL SEQUENCES cases.

I also agree. BTW, can we change pub_obj_type_list to
pub_all_obj_type_list to be more explicit about this variant? If you
agree then we can make similar changes
(pub_obj_type->pub_all_obj_type) at the following places as well:

+ * CREATE PUBLICATION FOR ALL pub_obj_type [, ...] [WITH options]
+ *
+ * pub_obj_type is one of:
+ *

Hi Vignesh, Amit

Thanks for confirming.
Also, I agree that the name 'pub_all_obj_type' will be more suitable.
I have updated the patch for the same.

Thanks, this version looks good to me, I don't have any comments.

Regards,
Vignesh

#484Chao Li
li.evan.chao@gmail.com
In reply to: vignesh C (#478)
Re: Logical Replication of sequences

Hi Vignesh,

A few more comments:

On Nov 7, 2025, at 22:47, vignesh C <vignesh21@gmail.com> wrote:

The attached v20251107_2 version patch has the changes for the same.

Regards,
Vignesh
<v20251107_2-0001-Documentation-for-sequence-synchronizati.patch>

1
```
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
```

Feels like this statement is not accurate and leaves the impression that a table has a fixed worker to serve it. So I think we can enhance this statement a little bit as:

```
Currently, only one table synchronization worker runs per table, and
only one sequence synchronization worker runs per subscription at a time.
```

2
```
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
```

Missed semi-colon for the first SQL statement.

3
```
+<programlisting>
+/* sub # */ SELECT srrelid::regclass, srsublsn FROM pg_subscription_rel ;
```

Unneeded white-space before the semi-colon.

4
```
+/* sub # */ SELECT * FROM s2
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        610 |       0 | t
+(1 row)
```

Again, missed semi-colon.

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/

#485vignesh C
vignesh21@gmail.com
In reply to: Chao Li (#484)
1 attachment(s)
Re: Logical Replication of sequences

On Tue, 11 Nov 2025 at 11:23, Chao Li <li.evan.chao@gmail.com> wrote:

Hi Vignesh,

A few more comments:

On Nov 7, 2025, at 22:47, vignesh C <vignesh21@gmail.com> wrote:

The attached v20251107_2 version patch has the changes for the same.

Regards,
Vignesh
<v20251107_2-0001-Documentation-for-sequence-synchronizati.patch>

1
```
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
```

Feels like this statement is not accurate and leaves an impression that a table has a fixed work to serve it. So I think we can enhance this statement a little bit as:

```
Currently, only one table synchronization worker runs per table, and
only one sequence synchronization worker runs per subscription at a time.
```

Changed it slightly.

2
```
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
```

Missed semi-colon for the first SQL statement.

Modified

3
```
+<programlisting>
+/* sub # */ SELECT srrelid::regclass, srsublsn FROM pg_subscription_rel ;
```

Unneeded white-space before the semi-colon.

Modified

4
```
+/* sub # */ SELECT * FROM s2
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        610 |       0 | t
+(1 row)
```

Again, missed semi-colon.

Modified

The attached v20251111 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v20251111-0001-Documentation-for-sequence-synchronization.patch (text/x-patch)
From af6fe4f540599183c2d98292243ebcecb3562bc2 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Mon, 27 Oct 2025 09:18:07 +0530
Subject: [PATCH v20251111] Documentation for sequence synchronization feature.

Documentation for sequence synchronization feature.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.co
---
 doc/src/sgml/catalogs.sgml                |   2 +-
 doc/src/sgml/config.sgml                  |  18 +-
 doc/src/sgml/func/func-sequence.sgml      |  39 ++-
 doc/src/sgml/logical-replication.sgml     | 296 ++++++++++++++++++++--
 doc/src/sgml/monitoring.sgml              |   5 +-
 doc/src/sgml/ref/alter_subscription.sgml  |  15 ++
 doc/src/sgml/ref/create_subscription.sgml |  31 ++-
 7 files changed, 357 insertions(+), 49 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 6c8a0f173c9..2fc63442980 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6568,7 +6568,7 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
        (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>oid</structfield>)
       </para>
       <para>
-       Reference to relation
+       Reference to table or sequence
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 683f7c36f46..64e55d1a853 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5198,9 +5198,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         is taken into account.
        </para>
        <para>
-        In logical replication, this parameter also limits how often a failing
-        replication apply worker or table synchronization worker will be
-        respawned.
+        In logical replication, this parameter also limits how quickly a
+        failing replication apply worker or table/sequence synchronization
+        worker will be respawned.
        </para>
       </listitem>
      </varlistentry>
@@ -5341,8 +5341,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
       <listitem>
        <para>
         Specifies maximum number of logical replication workers. This includes
-        leader apply workers, parallel apply workers, and table synchronization
-        workers.
+        leader apply workers, parallel apply workers, and table/sequence
+        synchronization workers.
        </para>
        <para>
         Logical replication workers are taken from the pool defined by
@@ -5365,10 +5365,14 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        <para>
         Maximum number of synchronization workers per subscription. This
         parameter controls the amount of parallelism of the initial data copy
-        during the subscription initialization or when new tables are added.
+        for tables during the subscription initialization or when new tables
+        are added. One additional worker is also needed for sequence
+        synchronization.
        </para>
        <para>
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker per subscription to
+        synchronize all sequences.
        </para>
        <para>
         The synchronization workers are taken from the pool defined by
diff --git a/doc/src/sgml/func/func-sequence.sgml b/doc/src/sgml/func/func-sequence.sgml
index e9f5b4e8e6b..25fd1d14a70 100644
--- a/doc/src/sgml/func/func-sequence.sgml
+++ b/doc/src/sgml/func/func-sequence.sgml
@@ -67,15 +67,15 @@
        </para>
        <para>
         Sets the sequence object's current value, and optionally
-        its <literal>is_called</literal> flag.  The two-parameter
-        form sets the sequence's <literal>last_value</literal> field to the
-        specified value and sets its <literal>is_called</literal> field to
-        <literal>true</literal>, meaning that the next
+        its <structfield>is_called</structfield> flag.  The two-parameter
+        form sets the sequence's <structfield>last_value</structfield> field to
+        the specified value and sets its <structfield>is_called</structfield>
+        field to <literal>true</literal>, meaning that the next
         <function>nextval</function> will advance the sequence before
         returning a value.  The value that will be reported
         by <function>currval</function> is also set to the specified value.
-        In the three-parameter form, <literal>is_called</literal> can be set
-        to either <literal>true</literal>
+        In the three-parameter form, <structfield>is_called</structfield> can
+        be set to either <literal>true</literal>
         or <literal>false</literal>.  <literal>true</literal> has the same
         effect as the two-parameter form. If it is set
         to <literal>false</literal>, the next <function>nextval</function>
@@ -143,6 +143,33 @@ SELECT setval('myseq', 42, false);    <lineannotation>Next <function>nextval</fu
         or <literal>SELECT</literal> privilege on the last used sequence.
        </para></entry>
       </row>
+
+      <row>
+       <entry role="func_table_entry" id="func-pg-get-sequence-data"><para role="func_signature">
+        <indexterm>
+         <primary>pg_get_sequence_data</primary>
+        </indexterm>
+        <function>pg_get_sequence_data</function> ( <type>regclass</type> )
+        <returnvalue>record</returnvalue>
+        ( <parameter>last_value</parameter> <type>bigint</type>,
+        <parameter>is_called</parameter> <type>bool</type>,
+         <parameter>page_lsn</parameter> <type>pg_lsn</type> )
+       </para>
+       <para>
+        Returns information about the sequence.
+        <structfield>last_value</structfield> is primarily intended for
+        internal use by pg_dump and by logical replication to synchronize
+        sequences. <structfield>is_called</structfield> indicates whether the
+        sequence has been used. Logical replication's sequence synchronization
+        updates the subscriber's sequence values to match the publisher's.
+        <structfield>page_lsn</structfield> is the LSN corresponding to the
+        most recent WAL record that modified this sequence relation.
+       </para>
+       <para>
+        This function requires <literal>USAGE</literal>
+        or <literal>SELECT</literal> privilege on the sequence.
+       </para></entry>
+      </row>
      </tbody>
     </tgroup>
    </table>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index d64ed9dc36b..0e8cbcf03f8 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -113,7 +113,9 @@
    Publications may currently only contain tables or sequences. Objects must be
    added explicitly, except when a publication is created using
    <literal>FOR TABLES IN SCHEMA</literal>, <literal>FOR ALL TABLES</literal>,
-   or <literal>FOR ALL SEQUENCES</literal>.
+   or <literal>FOR ALL SEQUENCES</literal>. Unlike tables, sequences can be
+   synchronized at any time. For more information, see
+   <xref linkend="logical-replication-sequences"/>.
   </para>
 
   <para>
@@ -253,7 +255,7 @@
 
   <para>
    When a subscription is dropped and recreated, the synchronization
-   information is lost.  This means that the data has to be resynchronized
+   information is lost.  This means that the data has to be re-synchronized
    afterwards.
   </para>
 
@@ -1745,6 +1747,247 @@ Publications:
   </note>
  </sect1>
 
+ <sect1 id="logical-replication-sequences">
+  <title>Replicating Sequences</title>
+
+  <para>
+   To synchronize sequences from a publisher to a subscriber, first publish
+   them using <link linkend="sql-createpublication-params-for-all-sequences">
+   <command>CREATE PUBLICATION ... FOR ALL SEQUENCES</command></link> and then
+   on the subscriber:
+  </para>
+
+  <para>
+   <itemizedlist>
+    <listitem>
+     <para>
+      use <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link>
+      to initially synchronize the published sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-publication">
+      <command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>
+      to synchronize only newly added sequences.
+     </para>
+    </listitem>
+    <listitem>
+     <para>
+      use <link linkend="sql-altersubscription-params-refresh-sequences">
+      <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+      to re-synchronize all sequences currently known to the subscription.
+     </para>
+    </listitem>
+   </itemizedlist>
+  </para>
+
+  <para>
+   A <firstterm>sequence synchronization worker</firstterm> will be started
+   after executing any of the above subscriber commands, and will exit once the
+   sequences are synchronized.
+  </para>
+  <para>
+   The ability to launch a sequence synchronization worker is limited by the
+   <link linkend="guc-max-sync-workers-per-subscription">
+   <varname>max_sync_workers_per_subscription</varname></link>
+   configuration.
+  </para>
+
+  <sect2 id="sequence-definition-mismatches">
+   <title>Sequence Definition Mismatches</title>
+   <para>
+    The sequence synchronization worker validates that sequence definitions
+    match between publisher and subscriber. If mismatches exist, the worker
+    logs an error identifying them and exits. The apply worker continues
+    respawning the sequence synchronization worker until synchronization
+    succeeds. See also
+    <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
+   </para>
+   <para>
+    To resolve this, use
+    <link linkend="sql-altersequence"><command>ALTER SEQUENCE</command></link>
+    to align the subscriber's sequence parameters with those of the publisher.
+   </para>
+  </sect2>
+
+  <sect2 id="sequences-out-of-sync">
+   <title>Refreshing Out-of-Sync Sequences</title>
+   <para>
+    Subscriber sequence values will become out of sync as the publisher
+    advances them.
+   </para>
+   <para>
+    To detect this, compare the
+    <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsublsn</structfield>
+    on the subscriber with the <structfield>page_lsn</structfield> obtained
+    from the <link linkend="func-pg-get-sequence-data"><function>pg_get_sequence_data</function></link>
+    function for the sequence on the publisher. Then run
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link> to
+    re-synchronize if necessary.
+   </para>
+   <warning>
+    <para>
+     Each sequence caches a block of values (typically 32) in memory before
+     generating a new WAL record, so its LSN advances only after the entire
+     cached batch has been consumed. As a result, sequence value drift cannot
+     be detected by LSN comparison when sequence increments fall within the
+     same cached block (typically 32 values).
+    </para>
+   </warning>
+  </sect2>
+
+  <sect2 id="logical-replication-sequences-examples">
+   <title>Examples</title>
+
+   <para>
+    Create some sequences on the publisher.
+<programlisting>
+/* pub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* pub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Create the same sequences on the subscriber.
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1;
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
+
+   <para>
+    Advance the sequences on the publisher a few times.
+<programlisting>
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      10
+(1 row)
+/* pub # */ SELECT nextval('s1');
+ nextval
+---------
+      11
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     100
+(1 row)
+/* pub # */ SELECT nextval('s2');
+ nextval
+---------
+     110
+(1 row)
+</programlisting></para>
+
+   <para>
+    Check the sequence page LSNs on the publisher.
+<programlisting>
+/* pub # */ SELECT * FROM pg_get_sequence_data('s1');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+         11 | t         | 0/0178F9E0
+(1 row)
+/* pub # */ SELECT * FROM pg_get_sequence_data('s2');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+        110 | t         | 0/0178FAB0
+(1 row)
+</programlisting></para>
+
+   <para>
+    Create a publication for the sequences.
+<programlisting>
+/* pub # */ CREATE PUBLICATION pub1 FOR ALL SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Subscribe to the publication.
+<programlisting>
+/* sub # */ CREATE SUBSCRIPTION sub1
+/* sub - */ CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
+/* sub - */ PUBLICATION pub1;
+</programlisting></para>
+
+   <para>
+    Verify that the initial sequence values are synchronized.
+<programlisting>
+/* sub # */ SELECT last_value, is_called FROM s1;
+ last_value | is_called
+------------+-----------
+         11 | t
+(1 row)
+
+/* sub # */ SELECT last_value, is_called FROM s2;
+ last_value | is_called
+------------+-----------
+        110 | t
+(1 row)
+</programlisting></para>
+
+   <para>
+    Confirm that the sequence page LSNs on the publisher have been recorded
+    on the subscriber.
+<programlisting>
+/* sub # */ SELECT srrelid::regclass, srsublsn FROM pg_subscription_rel;
+ srrelid |  srsublsn
+---------+------------
+ s1      | 0/0178F9E0
+ s2      | 0/0178FAB0
+(2 rows)
+</programlisting></para>
+
+   <para>
+    Advance the sequences on the publisher 50 more times.
+<programlisting>
+/* pub # */  SELECT nextval('s1') FROM generate_series(1,50);
+/* pub # */  SELECT nextval('s2') FROM generate_series(1,50);
+</programlisting></para>
+
+   <para>
+    Check the sequence page LSNs on the publisher.
+<programlisting>
+/* pub # */ SELECT * FROM pg_get_sequence_data('s1');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+         61 | t         | 0/017CED28
+(1 row)
+
+/* pub # */ SELECT * FROM pg_get_sequence_data('s2');
+ last_value | is_called |  page_lsn
+------------+-----------+------------
+        610 | t         | 0/017CEDF8
+(1 row)
+</programlisting></para>
+
+   <para>
+    The difference between the sequence page LSNs on the publisher and the
+    sequence page LSNs on the subscriber indicates that the sequences are out
+    of sync. Re-synchronize all sequences known to the subscriber using
+    <link linkend="sql-altersubscription-params-refresh-sequences">
+    <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
+<programlisting>
+/* sub # */ ALTER SUBSCRIPTION sub1 REFRESH SEQUENCES;
+</programlisting></para>
+
+   <para>
+    Recheck the sequences on the subscriber.
+<programlisting>
+/* sub # */ SELECT last_value, is_called FROM s1;
+ last_value | is_called
+------------+-----------
+         61 | t
+(1 row)
+
+/* sub # */ SELECT last_value, is_called FROM s2;
+ last_value | is_called
+------------+-----------
+        610 | t
+(1 row)
+</programlisting></para>
+  </sect2>
+ </sect1>
+
  <sect1 id="logical-replication-conflicts">
   <title>Conflicts</title>
 
@@ -2090,16 +2333,19 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <listitem>
     <para>
-     Sequence data is not replicated.  The data in serial or identity columns
-     backed by sequences will of course be replicated as part of the table,
-     but the sequence itself would still show the start value on the
-     subscriber.  If the subscriber is used as a read-only database, then this
-     should typically not be a problem.  If, however, some kind of switchover
-     or failover to the subscriber database is intended, then the sequences
-     would need to be updated to the latest values, either by copying the
-     current data from the publisher (perhaps
-     using <command>pg_dump</command>) or by determining a sufficiently high
-     value from the tables themselves.
+     Incremental sequence changes are not replicated.  Although the data in
+     serial or identity columns backed by sequences will be replicated as part
+     of the table, the sequences themselves do not replicate ongoing changes.
+     On the subscriber, a sequence will retain the last value it synchronized
+     from the publisher. If the subscriber is used as a read-only database,
+     then this should typically not be a problem.  If, however, some kind of
+     switchover or failover to the subscriber database is intended, then the
+     sequences would need to be updated to the latest values, either by
+     executing <link linkend="sql-altersubscription-params-refresh-sequences">
+     <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>
+     or by copying the current data from the publisher (perhaps using
+     <command>pg_dump</command>) or by determining a sufficiently high value
+     from the tables themselves.
     </para>
    </listitem>
 
@@ -2290,9 +2536,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
   </para>
 
   <para>
-   In order to be able to copy the initial table data, the role used for the
-   replication connection must have the <literal>SELECT</literal> privilege on
-   a published table (or be a superuser).
+   In order to be able to copy the initial table or sequence data, the role
+   used for the replication connection must have the <literal>SELECT</literal>
+   privilege on a published table or sequence (or be a superuser).
   </para>
 
   <para>
@@ -2303,8 +2549,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
   <para>
    To add tables to a publication, the user must have ownership rights on the
    table. To add all tables in schema to a publication, the user must be a
-   superuser. To create a publication that publishes all tables or all tables in
-   schema automatically, the user must be a superuser.
+   superuser. To create a publication that publishes all tables, all tables in
+   schema, or all sequences automatically, the user must be a superuser.
   </para>
 
   <para>
@@ -2329,8 +2575,11 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    privileges of the subscription owner. However, when performing an insert,
    update, delete, or truncate operation on a particular table, it will switch
    roles to the table owner and perform the operation with the table owner's
-   privileges. This means that the subscription owner needs to be able to
-   <literal>SET ROLE</literal> to each role that owns a replicated table.
+   privileges. Similarly, when synchronizing sequence data, it will switch to
+   the sequence owner's role and perform the operation using the sequence
+   owner's privileges. This means that the subscription owner needs to be able
+   to <literal>SET ROLE</literal> to each role that owns a replicated table or
+   sequence.
   </para>
 
   <para>
@@ -2423,8 +2672,8 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <para>
     <link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
     must be set to at least the number of subscriptions (for leader apply
-    workers), plus some reserve for the table synchronization workers and
-    parallel apply workers.
+    workers), plus some reserve for the parallel apply workers, and
+    table/sequence synchronization workers.
    </para>
 
    <para>
@@ -2437,8 +2686,9 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
    <para>
     <link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
-     controls the amount of parallelism of the initial data copy during the
-     subscription initialization or when new tables are added.
+     controls how many tables can be synchronized in parallel during
+     subscription initialization or when new tables are added. One additional
+     worker is also needed for sequence synchronization.
    </para>
 
    <para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 2741c138593..7b9fa20df9e 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -2045,8 +2045,9 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para>
       <para>
        Type of the subscription worker process.  Possible types are
-       <literal>apply</literal>, <literal>parallel apply</literal>, and
-       <literal>table synchronization</literal>.
+       <literal>apply</literal>, <literal>parallel apply</literal>,
+       <literal>table synchronization</literal>, and
+       <literal>sequence synchronization</literal>.
       </para></entry>
      </row>
 
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 8ab3b7fbd37..27c06439f4f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -195,6 +195,12 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
           use <link linkend="sql-altersubscription-params-refresh-sequences">
           <command>ALTER SUBSCRIPTION ... REFRESH SEQUENCES</command></link>.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/> for recommendations on how
+          to handle any warnings about sequence definition differences between
+          the publisher and the subscriber, which might occur when
+          <literal>copy_data = true</literal>.
+         </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of
           how <literal>copy_data = true</literal> can interact with the
@@ -225,6 +231,15 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
       data for all currently subscribed sequences. It does not add or remove
       sequences from the subscription to match the publication.
      </para>
+     <para>
+      See <xref linkend="sequence-definition-mismatches"/> for
+      recommendations on how to handle any warnings about sequence definition
+      differences between the publisher and the subscriber.
+     </para>
+     <para>
+      See <xref linkend="sequences-out-of-sync"/> for recommendations on how to
+      identify and handle out-of-sync sequences.
+     </para>
     </listitem>
    </varlistentry>
 
diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml
index ed82cf1809e..197be0c6f6b 100644
--- a/doc/src/sgml/ref/create_subscription.sgml
+++ b/doc/src/sgml/ref/create_subscription.sgml
@@ -127,10 +127,10 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
 
          <para>
           Since no connection is made when this option is
-          <literal>false</literal>, no tables are subscribed. To initiate
-          replication, you must manually create the replication slot, enable
-          the failover if required, enable the subscription, and refresh the
-          subscription. See
+          <literal>false</literal>, no tables and sequences are subscribed. To
+          initiate replication, you must manually create the replication slot,
+          enable the failover if required, enable the subscription, and refresh
+          the subscription. See
           <xref linkend="logical-replication-subscription-examples-deferred-slot"/>
           for examples.
          </para>
@@ -228,7 +228,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           the initial synchronization requires all data types to have binary
           send and receive functions, otherwise the synchronization will fail
           (see <xref linkend="sql-createtype"/> for more about send/receive
-          functions).
+          functions). This parameter has no effect for sequences.
          </para>
 
          <para>
@@ -265,6 +265,12 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <literal>copy_data = true</literal> can interact with the
           <literal>origin</literal> parameter.
          </para>
+         <para>
+          See <xref linkend="sequence-definition-mismatches"/>
+          for recommendations on how to handle any warnings about sequence
+          definition differences between the publisher and the subscriber,
+          which might occur when <literal>copy_data = true</literal>.
+         </para>
         </listitem>
        </varlistentry>
 
@@ -280,6 +286,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           temporary files and applied after the transaction is committed. Note
           that if an error happens in a parallel apply worker, the finish LSN
           of the remote transaction might not be reported in the server log.
+          This parameter has no effect for sequences.
          </para>
 
          <caution>
@@ -310,7 +317,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           The value of this parameter overrides the
           <xref linkend="guc-synchronous-commit"/> setting within this
           subscription's apply worker processes.  The default value
-          is <literal>off</literal>.
+          is <literal>off</literal>. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
@@ -340,7 +348,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
         <listitem>
          <para>
           Specifies whether two-phase commit is enabled for this subscription.
-          The default is <literal>false</literal>.
+          The default is <literal>false</literal>. This parameter has no effect
+          for sequences.
          </para>
 
          <para>
@@ -398,8 +407,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
          <para>
           If true, all replication actions are performed as the subscription
           owner. If false, replication workers will perform actions on each
-          table as the owner of that table. The latter configuration is
-          generally much more secure; for details, see
+          table or sequence as the owner of that relation. The latter
+          configuration is generally much more secure; for details, see
           <xref linkend="logical-replication-security" />.
           The default is <literal>false</literal>.
          </para>
@@ -417,6 +426,7 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           changes that don't have an origin. Setting <literal>origin</literal>
           to <literal>any</literal> means that the publisher sends changes
           regardless of their origin. The default is <literal>any</literal>.
+          This parameter has no effect for sequences.
          </para>
          <para>
           See <xref linkend="sql-createsubscription-notes"/> for details of how
@@ -449,7 +459,8 @@ CREATE SUBSCRIPTION <replaceable class="parameter">subscription_name</replaceabl
           <xref linkend="conflict-update-deleted"/> is enabled, and a physical
           replication slot named <quote><literal>pg_conflict_detection</literal></quote>
           is created on the subscriber to prevent the information for detecting
-          conflicts from being removed.
+          conflicts from being removed. This parameter has no effect for
+          sequences.
          </para>
 
          <para>
-- 
2.43.0

#486Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: vignesh C (#485)
Re: Logical Replication of sequences

On Tue, 11 Nov 2025 at 16:41, vignesh C <vignesh21@gmail.com> wrote:

On Tue, 11 Nov 2025 at 11:23, Chao Li <li.evan.chao@gmail.com> wrote:

Hi Vignesh,

A few more comments:

On Nov 7, 2025, at 22:47, vignesh C <vignesh21@gmail.com> wrote:

The attached v20251107_2 version patch has the changes for the same.

Regards,
Vignesh
<v20251107_2-0001-Documentation-for-sequence-synchronizati.patch>

1
```
-        Currently, there can be only one synchronization worker per table.
+        Currently, there can be only one table synchronization worker per table
+        and one sequence synchronization worker to synchronize all sequences.
```

Feels like this statement is not accurate and leaves the impression that a table has a fixed worker to serve it. So I think we can enhance this statement a little bit, as follows:

```
Currently, only one table synchronization worker runs per table, and
only one sequence synchronization worker runs per subscription at a time.
```

Changed it slightly.

2
```
+<programlisting>
+/* sub # */ CREATE SEQUENCE s1 START WITH 10 INCREMENT BY 1
+/* sub # */ CREATE SEQUENCE s2 START WITH 100 INCREMENT BY 10;
+</programlisting></para>
```

Missed semi-colon for the first SQL statement.

Modified

3
```
+<programlisting>
+/* sub # */ SELECT srrelid::regclass, srsublsn FROM pg_subscription_rel ;
```

Unneeded white-space before the semi-colon.

Modified

4
```
+/* sub # */ SELECT * FROM s2
+ last_value | log_cnt | is_called
+------------+---------+-----------
+        610 |       0 | t
+(1 row)
```

Again, missed semi-colon.

Modified

The attached v20251111 version patch has the changes for the same.

Hi Team,

While working on another thread, I noticed a bug introduced by a commit
that was part of this thread.
In the function pg_get_publication_tables, we have this code:
```
if (pub_elem->alltables)
        pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
                               pub_elem->pubviaroot);
      else
      {
        List   *relids,
             *schemarelids;

        relids = GetPublicationRelations(pub_elem->oid,
                         pub_elem->pubviaroot ?
                         PUBLICATION_PART_ROOT :
                         PUBLICATION_PART_LEAF);
        schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
                                pub_elem->pubviaroot ?
                                PUBLICATION_PART_ROOT :
                                PUBLICATION_PART_LEAF);
        pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
      }
```

So, when we create an 'ALL SEQUENCES' publication and execute
'SELECT * FROM pg_publication_tables',
we will enter the else branch in the above code, which does not
seem correct to me.
It calls functions that are not required to be called, and it also
calls the function 'GetPublicationRelations', which contradicts the
comment above that function.
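
To make this concrete, here is a minimal sketch of how that path is reached
(the object names are only illustrative):
```
-- A publication that contains only sequences.
CREATE SEQUENCE seq1;
CREATE PUBLICATION pub_seq FOR ALL SEQUENCES;

-- This goes through pg_get_publication_tables(); for pub_seq it still takes
-- the else branch even though the publication publishes no tables.
SELECT * FROM pg_publication_tables WHERE pubname = 'pub_seq';
```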

A similar issue is present in the functions "InvalidatePubRelSyncCache" and
"AlterPublicationOptions".

Thanks,
Shlok Kyal

#487vignesh C
vignesh21@gmail.com
In reply to: Shlok Kyal (#486)
1 attachment(s)
Re: Logical Replication of sequences

On Thu, 18 Dec 2025 at 12:37, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

Hi Team,

While working on another thread, I noticed a bug introduced by commit
as part of this thread.
In function pg_get_publication_tables, We have code:
```
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
else
{
List   *relids,
*schemarelids;

relids = GetPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
}
```

So, when we create an 'ALL SEQUENCE publication' and we execute
'SELECT * from pg_publication_tables'
We will enter the else condition in the above code, which does not
seem correct to me.
It will call functions which are not required to be called. It will
also call the function 'GetPublicationRelations' which contradicts the
comment above this function.

Similar issue is present for functions "InvalidatePubRelSyncCache" and
"AlterPublicationOptions".

Thanks, Shlok, for reporting these issues. In the areas highlighted, we
can skip processing for sequences-only publications. The attached
patch implements these changes.

Regards,
Vignesh

Attachments:

0001-Skip-table-specific-handling-for-sequences-only-publ.patch (text/x-patch)
From f607b671dc6c009fb7f56412840f0e6c8e368239 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 18 Dec 2025 14:59:44 +0530
Subject: [PATCH] Skip table specific handling for sequences only publications

Skip trying to get the list of tables, publish_via_partition_root
handling, and invalidation where the publication does not publish
tables. This avoids unnecessary work and ensures table specific
logic is applied only to relevant publications.
---
 src/backend/catalog/pg_publication.c   | 11 ++++++++---
 src/backend/commands/alter.c           |  7 +++++--
 src/backend/commands/publicationcmds.c |  6 +++---
 3 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7aa3f179924..39410120f65 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1170,7 +1170,7 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			if (pub_elem->alltables)
 				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
 															 pub_elem->pubviaroot);
-			else
+			else if (!pub_elem->allsequences)
 			{
 				List	   *relids,
 						   *schemarelids;
@@ -1203,8 +1203,13 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 				table_infos = lappend(table_infos, table_info);
 			}
 
-			/* At least one publication is using publish_via_partition_root. */
-			if (pub_elem->pubviaroot)
+			/*
+			 * At least one publication is using publish_via_partition_root.
+			 * Skip sequences only publications, as publish_via_partition_root
+			 * is applicable only to table publications.
+			 */
+			if (pub_elem->pubviaroot &&
+				(!pub_elem->allsequences || pub_elem->alltables))
 				viaroot = true;
 		}
 
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c
index cb75e11fced..e79756af7bb 100644
--- a/src/backend/commands/alter.c
+++ b/src/backend/commands/alter.c
@@ -349,9 +349,12 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 		 * Unlike ALTER PUBLICATION ADD/SET/DROP commands, renaming a
 		 * publication does not impact the publication status of tables. So,
 		 * we don't need to invalidate relcache to rebuild the rd_pubdesc.
-		 * Instead, we invalidate only the relsyncache.
+		 * Instead, invalidate the relation sync cache for publications that
+		 * include tables. Invalidation is not required for sequences only
+		 * publications.
 		 */
-		InvalidatePubRelSyncCache(pub->oid, pub->puballtables);
+		if (!pub->puballsequences || pub->puballtables)
+			InvalidatePubRelSyncCache(pub->oid, pub->puballtables);
 	}
 
 	/* Release memory */
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index a1983508950..e128789a673 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -1028,8 +1028,8 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	 * disallow using WHERE clause and column lists on partitioned table in
 	 * this case.
 	 */
-	if (!pubform->puballtables && publish_via_partition_root_given &&
-		!publish_via_partition_root)
+	if (!pubform->puballtables && !pubform->puballsequences &&
+		publish_via_partition_root_given && !publish_via_partition_root)
 	{
 		/*
 		 * Lock the publication so nobody else can do anything with it. This
@@ -1149,7 +1149,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	{
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!pubform->puballsequences)
 	{
 		List	   *relids = NIL;
 		List	   *schemarelids = NIL;
-- 
2.43.0

#488shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#487)
Re: Logical Replication of sequences

On Thu, Dec 18, 2025 at 3:04 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 18 Dec 2025 at 12:37, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

Hi Team,

While working on another thread, I noticed a bug introduced by commit
as part of this thread.
In function pg_get_publication_tables, We have code:
```
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
else
{
List   *relids,
*schemarelids;

relids = GetPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
}
```

So, when we create an 'ALL SEQUENCE publication' and we execute
'SELECT * from pg_publication_tables'
We will enter the else condition in the above code, which does not
seem correct to me.
It will call functions which are not required to be called. It will
also call the function 'GetPublicationRelations' which contradicts the
comment above this function.

Similar issue is present for functions "InvalidatePubRelSyncCache" and
"AlterPublicationOptions".

Thanks Shlok for reporting these issues, In the areas highlighted, we
can skip processing for sequences-only publications. The attached
patch implements these changes.

We have '!pub->alltables' check at 2 more places:
-- pgoutput_row_filter_init()
-- pg_get_publication_tables()

Do we need to worry about '!allsequences' at these 2 places as well?

thanks
Shveta

#489vignesh C
vignesh21@gmail.com
In reply to: shveta malik (#488)
Re: Logical Replication of sequences

On Thu, 18 Dec 2025 at 17:03, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Dec 18, 2025 at 3:04 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 18 Dec 2025 at 12:37, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

Hi Team,

While working on another thread, I noticed a bug introduced by commit
as part of this thread.
In function pg_get_publication_tables, We have code:
```
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
else
{
List   *relids,
*schemarelids;

relids = GetPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
}
```

So, when we create an 'ALL SEQUENCE publication' and we execute
'SELECT * from pg_publication_tables'
We will enter the else condition in the above code, which does not
seem correct to me.
It will call functions which are not required to be called. It will
also call the function 'GetPublicationRelations' which contradicts the
comment above this function.

Similar issue is present for functions "InvalidatePubRelSyncCache" and
"AlterPublicationOptions".

Thanks Shlok for reporting these issues, In the areas highlighted, we
can skip processing for sequences-only publications. The attached
patch implements these changes.

We have '!pub->alltables' check at 2 more places:
-- pgoutput_row_filter_init()

It's not necessary here, because only publications that actually
replicate the given relation are passed to pgoutput_row_filter_init.
Sequence publications are excluded from this list, so they will never
appear there.

-- pg_get_publication_tables()

We don't need to explicitly check !pub->allsequences here. The guard
if (funcctx->call_cntr < list_length(table_infos))
already ensures that the remaining code is executed only when
table_infos contains elements. If the list has entries, it necessarily
represents a table publication, and it will enumerate those tables. If
the list is empty, the condition fails and no rows are returned, so
the extra check is not required.
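
As a quick illustrative check (sketch only; 'pub_seq' stands for a
sequences-only publication), the empty table_infos list simply shows up as
an empty result:
```
-- No table rows are produced for a sequences-only publication, so the code
-- guarded by the call_cntr check is never reached.
SELECT * FROM pg_publication_tables WHERE pubname = 'pub_seq';
-- (0 rows)
```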

Regards,
Vignesh

#490shveta malik
shveta.malik@gmail.com
In reply to: vignesh C (#489)
Re: Logical Replication of sequences

On Thu, Dec 18, 2025 at 5:30 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 18 Dec 2025 at 17:03, shveta malik <shveta.malik@gmail.com> wrote:

On Thu, Dec 18, 2025 at 3:04 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, 18 Dec 2025 at 12:37, Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

Hi Team,

While working on another thread, I noticed a bug introduced by commit
as part of this thread.
In function pg_get_publication_tables, We have code:
```
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
else
{
List   *relids,
*schemarelids;

relids = GetPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
}
```

So, when we create an 'ALL SEQUENCE publication' and we execute
'SELECT * from pg_publication_tables'
We will enter the else condition in the above code, which does not
seem correct to me.
It will call functions which are not required to be called. It will
also call the function 'GetPublicationRelations' which contradicts the
comment above this function.

Similar issue is present for functions "InvalidatePubRelSyncCache" and
"AlterPublicationOptions".

Thanks Shlok for reporting these issues, In the areas highlighted, we
can skip processing for sequences-only publications. The attached
patch implements these changes.

We have '!pub->alltables' check at 2 more places:
-- pgoutput_row_filter_init()

It's not necessary here, because only publications that actually
replicate the given relation are passed to pgoutput_row_filter_init.
Sequence publications are excluded from this list, so they will never
appear there.

-- pg_get_publication_tables()

We don't need to explicitly check !pub->allsequences here. The guard
if (funcctx->call_cntr < list_length(table_infos))
already ensures that the remaining code is executed only when
table_infos contains elements. If the list has entries, it necessarily
represents a table publication, and it will enumerate those tables. If
the list is empty, the condition fails and no rows are returned, so
the extra check is not required.

Okay, the patch LGTM.

thanks
Shveta

#491Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#487)
Re: Logical Replication of sequences

Hi Vignesh.

I had a quick look at this patch.

======

- * Instead, we invalidate only the relsyncache.
+ * Instead, invalidate the relation sync cache for publications that
+ * include tables. Invalidation is not required for sequences only
+ * publications.
  */
- InvalidatePubRelSyncCache(pub->oid, pub->puballtables);
+ if (!pub->puballsequences || pub->puballtables)
+ InvalidatePubRelSyncCache(pub->oid, pub->puballtables);

I felt all these "sequence only" conditions are becoming difficult to read.

I wonder if it would be better to introduce some function like:

bool
PubHasSequencesOnly(pub)
{
return pub->puballsequences && !pub->puballtables;
}

IIUC the current patch code only works because publication syntax like
below is not yet supported:

CREATE PUBLICATION pub1 FOR ALL SEQUENCES, TABLE t1;

But when that syntax does become possible, then all these conditions
will be broken.

Should we prepare for that eventuality by introducing some function
right now, so as to contain all the future broken code?

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#492vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#491)
1 attachment(s)
Re: Logical Replication of sequences

On Fri, 19 Dec 2025 at 10:05, Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh.

I had a quick look at this patch.

======

- * Instead, we invalidate only the relsyncache.
+ * Instead, invalidate the relation sync cache for publications that
+ * include tables. Invalidation is not required for sequences only
+ * publications.
*/
- InvalidatePubRelSyncCache(pub->oid, pub->puballtables);
+ if (!pub->puballsequences || pub->puballtables)
+ InvalidatePubRelSyncCache(pub->oid, pub->puballtables);

I felt all these "sequence only" conditions are becoming difficult to read.

It wonder if it would be better to introduce some function like:

bool
PubHasSequencesOnly(pub)
{
return pub->puballsequences && !pub->puballtables;
}

I added a macro following a similar pattern. However, since the
function calls use different data types, I modified it to pass
individual members instead of the structure.

IIUC the current patch code only works because publication syntax like
below is not yet supported:

CREATE PUBLICATION pub1 FOR ALL SEQUENCES, TABLE t1;

But when that syntax does become possible, then all these conditions
will be broken.

Should we prepare for that eventuality by introducing some function
right now, so as to contain all the future broken code?

I believe no specific handling is required. These flows will be
naturally addressed as part of the feature implementation.

The attached patch has the changes for the comments.

Regards,
Vignesh

Attachments:

v2-0001-Skip-table-specific-handling-for-sequences-only-p.patch (text/x-patch)
From 0fe834fbc5794bb43fbaffc912564f9bc6e6ebb1 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignesh21@gmail.com>
Date: Thu, 18 Dec 2025 14:59:44 +0530
Subject: [PATCH v2] Skip table specific handling for sequences only
 publications

Skip trying to get the list of tables, publish_via_partition_root
handling, and invalidation where the publication does not publish
tables. This avoids unnecessary work and ensures table specific
logic is applied only to relevant publications.
---
 src/backend/catalog/pg_publication.c   | 11 ++++++++---
 src/backend/commands/alter.c           |  6 ++++--
 src/backend/commands/publicationcmds.c |  6 +++---
 src/include/catalog/pg_publication.h   |  6 ++++++
 4 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7aa3f179924..6ce745f379b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1170,7 +1170,7 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 			if (pub_elem->alltables)
 				pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
 															 pub_elem->pubviaroot);
-			else
+			else if (!pub_elem->allsequences)
 			{
 				List	   *relids,
 						   *schemarelids;
@@ -1203,8 +1203,13 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
 				table_infos = lappend(table_infos, table_info);
 			}
 
-			/* At least one publication is using publish_via_partition_root. */
-			if (pub_elem->pubviaroot)
+			/*
+			 * At least one publication is using publish_via_partition_root.
+			 * Skip sequences only publications, as publish_via_partition_root
+			 * is applicable only to table publications.
+			 */
+			if (pub_elem->pubviaroot && !PUB_HAS_SEQUENCES_ONLY(pub_elem->allsequences,
+																pub_elem->alltables))
 				viaroot = true;
 		}
 
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c
index cb75e11fced..a6ce56b2544 100644
--- a/src/backend/commands/alter.c
+++ b/src/backend/commands/alter.c
@@ -349,9 +349,11 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 		 * Unlike ALTER PUBLICATION ADD/SET/DROP commands, renaming a
 		 * publication does not impact the publication status of tables. So,
 		 * we don't need to invalidate relcache to rebuild the rd_pubdesc.
-		 * Instead, we invalidate only the relsyncache.
+		 * Instead, we invalidate only the relsyncache. Invalidation is not
+		 * required for sequences only publications.
 		 */
-		InvalidatePubRelSyncCache(pub->oid, pub->puballtables);
+		if (!PUB_HAS_SEQUENCES_ONLY(pub->puballsequences, pub->puballtables))
+			InvalidatePubRelSyncCache(pub->oid, pub->puballtables);
 	}
 
 	/* Release memory */
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index a1983508950..e128789a673 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -1028,8 +1028,8 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	 * disallow using WHERE clause and column lists on partitioned table in
 	 * this case.
 	 */
-	if (!pubform->puballtables && publish_via_partition_root_given &&
-		!publish_via_partition_root)
+	if (!pubform->puballtables && !pubform->puballsequences &&
+		publish_via_partition_root_given && !publish_via_partition_root)
 	{
 		/*
 		 * Lock the publication so nobody else can do anything with it. This
@@ -1149,7 +1149,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
 	{
 		CacheInvalidateRelcacheAll();
 	}
-	else
+	else if (!pubform->puballsequences)
 	{
 		List	   *relids = NIL;
 		List	   *schemarelids = NIL;
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 22f48bb8975..cabc509e659 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -148,6 +148,12 @@ typedef struct PublicationRelInfo
 	List	   *columns;
 } PublicationRelInfo;
 
+/*
+ * Publication is defined for all sequences only (no tables included).
+ */
+#define PUB_HAS_SEQUENCES_ONLY(allsequences, alltables) \
+	((allsequences) && !(alltables))
+
 extern Publication *GetPublication(Oid pubid);
 extern Publication *GetPublicationByName(const char *pubname, bool missing_ok);
 extern List *GetRelationPublications(Oid relid);
-- 
2.43.0

#493Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#492)
Re: Logical Replication of sequences

Hi Vignesh,

A couple of review comments for v2-0001

======
src/backend/catalog/pg_publication.c

pg_get_publication_tables:

1.
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
- else
+ else if (!pub_elem->allsequences)
{
List *relids,
*schemarelids;
@@ -1203,8 +1203,13 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
table_infos = lappend(table_infos, table_info);
}

- /* At least one publication is using publish_via_partition_root. */
- if (pub_elem->pubviaroot)
+ /*
+ * At least one publication is using publish_via_partition_root.
+ * Skip sequences only publications, as publish_via_partition_root
+ * is applicable only to table publications.
+ */
+ if (pub_elem->pubviaroot && !PUB_HAS_SEQUENCES_ONLY(pub_elem->allsequences,
+ pub_elem->alltables))
  viaroot = true;

Won't it be simpler to check this up-front and then just 'continue'?
Then you wouldn't have to handle "sequence only" for the rest of the
loop logic.

e.g.

pub_elem = ...

/* Skip this publication if no TABLES are published. */
if (PUB_HAS_SEQUENCES_ONLY(pub_elem->allsequences, pub_elem->alltables))
continue;

if (pub_elem->alltables)
...
else
...

======
src/backend/commands/publicationcmds.c

2.
- if (!pubform->puballtables && publish_via_partition_root_given &&
- !publish_via_partition_root)
+ if (!pubform->puballtables && !pubform->puballsequences &&
+ publish_via_partition_root_given && !publish_via_partition_root)

I felt this modified condition ought to be expressed as:

if (!PUB_HAS_SEQUENCES_ONLY(...) && <original condition>

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#494shveta malik
shveta.malik@gmail.com
In reply to: Peter Smith (#493)
Re: Logical Replication of sequences

On Mon, Dec 22, 2025 at 4:41 AM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Vignesh,

A couple of review comments for v2-0001

======
src/backend/catalog/pg_publication.c

pg_get_publication_tables:

1.
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
- else
+ else if (!pub_elem->allsequences)
{
List *relids,
*schemarelids;
@@ -1203,8 +1203,13 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
table_infos = lappend(table_infos, table_info);
}

- /* At least one publication is using publish_via_partition_root. */
- if (pub_elem->pubviaroot)
+ /*
+ * At least one publication is using publish_via_partition_root.
+ * Skip sequences only publications, as publish_via_partition_root
+ * is applicable only to table publications.
+ */
+ if (pub_elem->pubviaroot && !PUB_HAS_SEQUENCES_ONLY(pub_elem->allsequences,
+ pub_elem->alltables))
viaroot = true;

Won't it be simpler to check this up-front and then just 'continue'?
Then you wouldn't have to handle "sequence only" for the rest of the
loop logic.

+1. It will simplify the code.

e.g.

pub_elem = ...

/* Skip this publication if no TABLES are published. */
if (PUB_HAS_SEQUENCES_ONLY(pub_elem->allsequences, pub_elem->alltables)
continue;

if (pub_elem->alltables)
...
else
...

======
src/backend/commands/publicationcmds.c

2.
- if (!pubform->puballtables && publish_via_partition_root_given &&
- !publish_via_partition_root)
+ if (!pubform->puballtables && !pubform->puballsequences &&
+ publish_via_partition_root_given && !publish_via_partition_root)

I felt this modified condition ought to be expressed as:

if (!PUB_HAS_SEQUENCES_ONLY(...) && <original condition>

======
Kind Regards,
Peter Smith.
Fujitsu Australia

#495Amit Kapila
amit.kapila16@gmail.com
In reply to: Shlok Kyal (#486)
Re: Logical Replication of sequences

On Thu, Dec 18, 2025 at 12:37 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

While working on another thread, I noticed a bug introduced by commit
as part of this thread.
In function pg_get_publication_tables, We have code:
```
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
else
{
List   *relids,
*schemarelids;

relids = GetPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
}
```

So, when we create an 'ALL SEQUENCE publication' and we execute
'SELECT * from pg_publication_tables'
We will enter the else condition in the above code, which does not
seem correct to me.
It will call functions which are not required to be called. It will
also call the function 'GetPublicationRelations' which contradicts the
comment above this function.

I see that we will needlessly call GetPublicationRelations and others
for an all_sequences publication, but is there any problem/bug due to
that? AFAICS, the function will still return correct results. Yes,
there is an argument for better performance with large numbers of
all_sequences publications, and that too in DDL like Create/Alter
Subscription. I am not sure that it is really worth adding more checks
at multiple places in the code, though we can improve the comment atop
GetPublicationRelations. I feel that if we encounter such cases in the
field, then it makes sense to add these additional optimizations at
various places.
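
For context, the DDL paths in question are the subscriber-side commands that
fetch the publication's table list from the publisher, e.g. (sketch; the
connection string and names are placeholders):
```
-- Both commands query the publisher for the publication's table list, which
-- is where the extra lookups for a sequences-only publication would happen.
CREATE SUBSCRIPTION sub_seq
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION pub_seq;

ALTER SUBSCRIPTION sub_seq REFRESH PUBLICATION;
```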

--
With Regards,
Amit Kapila.

#496Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: Amit Kapila (#495)
1 attachment(s)
Re: Logical Replication of sequences

On Mon, 22 Dec 2025 at 11:08, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Dec 18, 2025 at 12:37 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

While working on another thread, I noticed a bug introduced by commit
as part of this thread.
In function pg_get_publication_tables, We have code:
```
if (pub_elem->alltables)
pub_elem_tables = GetAllPublicationRelations(RELKIND_RELATION,
pub_elem->pubviaroot);
else
{
List   *relids,
*schemarelids;

relids = GetPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
}
```

So, when we create an 'ALL SEQUENCE publication' and we execute
'SELECT * from pg_publication_tables'
We will enter the else condition in the above code, which does not
seem correct to me.
It will call functions which are not required to be called. It will
also call the function 'GetPublicationRelations' which contradicts the
comment above this function.

I see that we will needlessly call GetPublicationRelations or others
for all_schema publication but is there any problem/bug due to that?

No, I did not encounter a problem/bug.

AFAICS, the function will still return correct results. Yes, there is
an argument to better performance for large numbers of all_sequence
publications and that too in DDL like Create/Alter Subscription. I am
not sure that it is really worth adding more checks at multiple places
in the code though we can improve comments atop
GetPublicationRelations. I feel if we encounter such cases in the
field then it makes sense to add these additional optimizations at
various places.

I agree with you, and I have attached a patch that modifies the comment
above GetPublicationRelations.

Thanks,
Shlok Kyal

Attachments:

v1-0001-Improve-comment-for-GetPublicationRelations.patch (application/octet-stream)
From c6b830e45000f5b879574e1147c498d2c8891594 Mon Sep 17 00:00:00 2001
From: Shlok Kyal <shlok.kyal.oss@gmail.com>
Date: Wed, 24 Dec 2025 12:04:48 +0530
Subject: [PATCH v1] Improve comment for GetPublicationRelations

The comment for GetPublicationRelations previously stated that the
function should only be used for FOR TABLE publications. However, the
function can also be invoked for FOR ALL SEQUENCES publications in some
cases.

Update the comment to reflect the current behavior when the function is
called for FOR ALL TABLES or FOR ALL SEQUENCES publications.
---
 src/backend/catalog/pg_publication.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 7aa3f179924..46183f1fba7 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -776,8 +776,12 @@ GetRelationPublications(Oid relid)
 /*
  * Gets list of relation oids for a publication.
  *
- * This should only be used FOR TABLE publications, the FOR ALL TABLES/SEQUENCES
- * should use GetAllPublicationRelations().
+ * This is mainly used for FOR TABLE publications. When invoked for
+ * FOR ALL SEQUENCES or FOR ALL TABLES publications, the result is an empty
+ * list.
+ *
+ * Use GetAllPublicationRelations() for FOR ALL TABLES or FOR ALL SEQUENCES
+ * publications.
  */
 List *
 GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
-- 
2.34.1